
 Client/Server Software Testing


Subject: Client/Server Software Testing   Posted: Fri 11 Apr - 15:15

I: Introduction to Client/Server architecture:

Client/Server system development is the preferred method of constructing cost-effective department- and enterprise-level strategic corporate information systems. It allows the rapid deployment of information systems in end-user environments.

1: What is Client/Server Computing?

Client/Server computing is a style of computing in which a single business transaction is completed across multiple processors, one of which is typically a workstation [1]. Client/Server computing recognizes that business users, and not a mainframe, are the center of a business. Therefore, Client/Server is also called “client-centric” computing.

Today, Client/Server computing has been extended to the Internet as netcentric computing (network-centric computing), and the concept of the business user has expanded greatly. The Forrester Report describes netcentric computing as “remote servers and clients cooperating over the Internet to do work” and says that Internet computing extends and improves the Client/Server model [2].

The characteristics of Client/Server computing include: 1. There are multiple processors. 2. A complete business transaction is processed across multiple servers.

Netcentric computing, as an evolution of the Client/Server model, has brought new technology to the forefront, especially in the areas of external presence and access, ease of distribution, and media capabilities. Some of these new technologies are [3]:

Browser, which provides a “universal client”: In the traditional Client/Server environment, distributing an application internally or externally across an enterprise requires that the application be recompiled and tested for every specific workstation platform (operating system). It also usually requires loading the application on each client machine. The browser-centric application style offers an alternative. The web browser provides a universal client that offers users a consistent and familiar user interface; using a browser, a user can launch many types of applications and view many types of documents. This can be accomplished on different operating systems and is independent of where the applications or documents reside.

Direct supplier-to-customer relationships: The external presence and access enabled by connecting a business node to the Internet has opened up opportunities to reach an audience beyond a company’s traditional internal users.

Richer documents: Netcentric technologies (such as HTML documents, plug-ins, and Java) and standardization of media information formats enable support for complex documents, applications, and even nondiscrete data types such as audio and video.

Application version checking and dynamic update: Configuration management of traditional Client/Server applications, which tend to be stored on both the client and server sides, is a major issue for many corporations. Netcentric computing can check and update application versions dynamically.

2: Architectures for Client/Server System.

Both traditional Client/Server as well as netcentric computing are tiered architectures. In both cases, there is a distribution of presentation services, application code, and data across clients and servers. In both cases, there is a networking protocol that is used for communication between clients and servers. In both cases, they support a style of computing where processes on different machines communicate using messages. In this style, the “client” delegates business functions or other tasks (such as data manipulation logic) to one or more server processes. Server processes respond to messages from clients.

A Client/Server system has several layers, which can be visualized in either a conceptual or a physical manner. Viewed conceptually, the layers are presentation, process, and database. Viewed physically, the layers are server, client, middleware, and network.

2.1. Client/Server 2-tiered architecture:

2-tiered architecture is also known as the client-centric model, which implements a “fat” client. Nearly all of the processing happens on the client, and the client accesses the database directly rather than through any middleware. In this model, all of the presentation logic and the business logic are implemented as processes on the client.

2-tiered architecture is the simplest to implement and hence the simplest to test. It is also the most stable form of Client/Server implementation, so most of the errors testers find are independent of the implementation. Direct access to the database makes it simpler to verify test results.
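The direct-verification advantage can be sketched in a few lines. This is a minimal, hypothetical example: an in-memory SQLite database stands in for the shared department database, and `place_order` stands in for fat-client business logic; neither name comes from the source.

```python
import sqlite3

# In-memory SQLite stands in for the shared database that a
# 2-tiered fat client would connect to directly.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)")

def place_order(conn, item, qty):
    """Fat-client business logic: all processing happens on the client."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))
    conn.commit()

# The test exercises the client logic, then verifies the result by
# querying the database directly -- no middleware in the way.
place_order(db, "widget", 3)
row = db.execute("SELECT item, qty FROM orders").fetchone()
assert row == ("widget", 3)
print("direct-verification test passed")
```

Because nothing sits between the test and the data, expected and actual database state can be compared in the same process, which is exactly why 2-tiered systems are the easiest to test.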

The disadvantages of this model are limited scalability and difficult maintenance. Because it does not partition the application logic well, any change requires reinstalling the software on every client desktop.

2.2. Modified 2-tiered architecture:

To escape the maintenance nightmare of the 2-tiered Client/Server architecture, the business logic is moved to the database side and implemented with triggers and stored procedures. This model is known as the modified 2-tiered architecture.

In terms of software testing, the modified 2-tiered architecture is more complex than the plain 2-tiered architecture for the following reasons: It is difficult to test the business logic directly, and special tools are required to implement and verify the tests. It is possible to exercise the business logic from the GUI, but there is no way to determine how many procedures and/or triggers fire and create intermediate results before the end product is achieved. Another complication is dynamic database queries: they are constructed by the application and exist only while the program needs them, so it is very difficult to be sure that a test generates a query “correctly”, or as expected. Special utilities that show what is running in memory must be used during the tests.
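The indirect nature of trigger-based business logic can be illustrated with a small sketch. SQLite is used here only as a stand-in for the server database, and the balance-audit rule is an invented example, not anything from the source; the point is that the test can only observe the trigger through its side effects.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE audit (account_id INTEGER, old_balance INTEGER, new_balance INTEGER);

-- The business rule lives in the database as a trigger, not in the client.
CREATE TRIGGER log_balance_change AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")

db.execute("INSERT INTO accounts VALUES (1, 100)")
db.execute("UPDATE accounts SET balance = 150 WHERE id = 1")

# The trigger fires invisibly during the UPDATE; the test must inspect
# its side effects in the audit table to verify the server-side rule.
audit_rows = db.execute("SELECT * FROM audit").fetchall()
assert audit_rows == [(1, 100, 150)]
print("trigger side effect verified")
```

Note that the test never calls the business logic at all; it can only issue the triggering statement and then look for evidence that the rule ran, which is why special tooling is needed in real systems.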

2.3. 3-tiered architecture:

In 3-tiered architecture, the application is divided into a presentation tier, a middle tier, and a data tier. The middle tier is composed of one or more application servers distributed across one or more physical machines. This architecture is also termed the “thin client, fat server” approach. The model is complicated to test because business and/or data objects can be invoked from many clients, and the objects can be partitioned across many servers. The same characteristics that make the 3-tiered architecture desirable as a development and implementation framework also make its testing more complicated and tricky.

3: Critical Issues Involved in Client/Server System Management:

Hurwitz Consulting Group, Inc. has provided a framework for managing Client/Server systems that identifies eight primary management issues [4]:

1. Performance
2. Problem management
3. Software distribution
4. Configuration and administration
5. Data and storage
6. Operations
7. Security
8. Licensing

II Client/Server Software Testing:

Software testing for Client/Server systems (desktop or webtop) presents a new set of testing problems, but it also includes the more traditional problems testers have always faced in the mainframe world. Atre describes the special requirements of Client/Server testing [5]:

1. The client’s user interface
2. The client’s interface with the server
3. The server’s functionality
4. The network (its reliability and performance)

1. Introduction to Client/Server Software Testing:

We can view the Client/Server software testing from different perspectives:

From a “distributed processing” perspective: Since Client/Server is a form of distributed processing, it is necessary to consider its testing implications from that point of view. The term “distributed” implies that data and processes are dispersed across various and miscellaneous platforms. Binder states several issues that need to be considered in Client/Server environments [6]: client GUI considerations; target environment and platform diversity; distributed database considerations (including replicated data); distributed processing considerations (including replicated processes); nonrobust target environments; and nonlinear performance relationships.

From a cross-platform perspective: The networked, cross-platform nature of Client/Server systems requires that we pay much more attention to configuration testing and compatibility testing. The purpose of configuration testing is to uncover weaknesses of the system when operated in the different known hardware and software environments. The purpose of compatibility testing is to find any functional inconsistency of the interface across hardware and software.

From a cross-window perspective: The proliferation of Microsoft Windows environments has created a number of problems for Client/Server developers. For example, Windows 3.1 is a 16-bit environment, while Windows 95 and Windows NT are 32-bit environments. Mixing and matching 16-bit and 32-bit code, systems, and products causes major problems. There now exist automated tools that can generate both 16-bit and 32-bit test scripts.

2. Testing Plan for Client/Server Computing:

In many instances, testing Client/Server software cannot be planned from the perspective of traditional integrated testing activities, because this view either is not applicable at all or is too narrow, and other dimensions must be considered. A Client/Server testing plan:

1. Must include consideration of the different hardware and software platforms on which the system will be used.
2. Must take into account network and database server performance issues with which mainframe systems did not have to deal.
3. Has to consider the replication of data and processes across networked servers.

See attached “Client/Server test plan based on application functionality” [7].

In the test plan, we may address or construct several different kinds of testing:

The system test plan: System test scenarios are a set of test scripts that reflect user behavior in a typical business situation. It is very important to identify the business scenarios before constructing the system test plan.

See attached CASE STUDY: The business scenarios for the MFS imaging system

The user acceptance test plan: The user acceptance test plan is very similar to the system test plan. The major difference is direction: the user acceptance test is designed to demonstrate the major system features to the user, as opposed to finding new errors.

See attached CASE STUDY: Acceptance test specification for the MFS imaging system

The operational test plan: It guides single-user testing of the graphical user interface and of the system function. This plan should be constructed according to subsections A and B of Section II of the testing plan template, “Client/Server test plan based on application functionality” (see attached Appendix I).

The regression test plan: Regression testing occurs at two levels. In Client/Server development, regression testing happens between builds; between system releases, it also occurs postproduction. Each new build/release must be tested for three things: To uncover errors introduced by a fix into previously correct functions. To uncover previously reported errors that remain. To uncover errors in the new functionality.
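The three goals above amount to one rule: every build re-runs the full suite, old cases and new. A minimal sketch, assuming an invented `discount_v2` function as the “new build” under test (the function and its 10%-off rule are illustrative, not from the source):

```python
# Hypothetical regression harness: the same run catches errors a fix
# introduced, errors a fix failed to remove, and errors in new features.

def discount_v2(total):           # the "new build" under test
    if total >= 100:
        return total * 0.9        # new functionality: 10% off large orders
    return total

regression_suite = [
    # (input, expected) -- old cases guard previously correct behavior
    (50, 50),
    (99, 99),
    # new cases cover the new functionality
    (100, 90.0),
    (200, 180.0),
]

failures = [(t, e, discount_v2(t)) for t, e in regression_suite
            if discount_v2(t) != e]
assert not failures, failures
print(f"{len(regression_suite)} regression cases passed")
```

Keeping old and new cases in one suite is what makes the between-build and postproduction regression runs described above cheap to repeat.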

The multiuser performance test plan: It must be performed in order to uncover any unexpected system performance problems under load. This plan should be constructed from Section V of the testing plan template, “Client/Server test plan based on application functionality” (see attached Appendix I).
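The shape of a multiuser load test can be sketched with threads simulating concurrent users. Everything here is a stand-in: `server_request` fakes a server round trip with a sleep, and the user counts are arbitrary; a real plan would drive the actual application server.

```python
import threading
import time

def server_request(payload):
    """Stand-in for a server round trip; a real test would call the app server."""
    time.sleep(0.01)              # simulated server processing time
    return payload * 2

results, latencies = [], []
lock = threading.Lock()

def simulated_user(user_id, requests=5):
    # Each simulated user issues several requests and records latency.
    for i in range(requests):
        start = time.perf_counter()
        r = server_request(i)
        elapsed = time.perf_counter() - start
        with lock:
            results.append(r)
            latencies.append(elapsed)

threads = [threading.Thread(target=simulated_user, args=(u,)) for u in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} requests completed, worst latency {max(latencies)*1000:.1f} ms")
```

The interesting output is the latency distribution under concurrency, not the functional results; performance problems that never appear for a single user show up in the worst-case numbers here.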

3. Client/Server Testing in Different Layers:

3.1. Testing on the Client Side—Graphic User Interface Testing:

3.1.1 The complexity of graphical user interface testing is due to:

Cross-platform nature: The same GUI objects may be required to run transparently (providing a consistent interface across platforms, with the cross-platform nature unknown to the user) on different hardware and software platforms.

Event-driven nature: GUI-based applications have increased testing requirements because they run in an event-driven environment, where user actions are events that determine the application’s behavior. Because the number of available user actions is very high, the number of logical paths in the supporting program code is also very high.

Mouse input: The mouse, as an alternate method of input, also raises problems. It is necessary to ensure that the application handles both mouse input and keyboard input correctly.

Supporting files: GUI testing also requires testing for the existence of files that provide supporting data/information for text objects. The application must be sensitive to their existence or nonexistence.

Customization: In many cases, GUI testing also involves testing the function that allows end users to customize GUI objects. Many GUI development tools give users the ability to define their own GUI objects; this requires the underlying application to be able to recognize and process events related to these custom objects.

3.1.2 GUI testing techniques: Many traditional software testing techniques can be used in GUI testing.

Review techniques such as walkthroughs and inspections [8]: These human testing procedures have been found to be very effective in the prevention and early correction of errors. It has been documented that two-thirds of all errors in finished information systems are the result of logic flaws rather than poor coding [9]. Preventive testing approaches such as walkthroughs and inspections can eliminate the majority of these analysis and design errors before they reach the production system.

Data validation techniques: Some of the most serious errors in software systems have resulted from inadequate or missing input validation. Software testing offers powerful data validation procedures in the form of the Black Box techniques of Equivalence Partitioning, Boundary Analysis, and Error Guessing. These techniques are also very useful in GUI testing.
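Equivalence partitioning and boundary analysis can be made concrete with a small sketch. The field and its 1..100 valid range are assumed for illustration; the technique applies to any bounded input taken from a GUI field.

```python
# Boundary-value cases for a field whose assumed valid range is 1..100.
LOW, HIGH = 1, 100

def is_valid(qty):
    """The validation rule under test (a stand-in for real input validation)."""
    return LOW <= qty <= HIGH

# Equivalence partitions: below range, in range, above range.
# Boundary analysis adds the values on and immediately adjacent to
# each boundary, where off-by-one errors concentrate.
cases = {
    LOW - 1:  False,   # just below the lower boundary
    LOW:      True,    # on the lower boundary
    LOW + 1:  True,
    HIGH - 1: True,
    HIGH:     True,    # on the upper boundary
    HIGH + 1: False,   # just above the upper boundary
}

for value, expected in cases.items():
    assert is_valid(value) == expected, value
print(f"{len(cases)} boundary cases passed")
```

Six cases cover all three partitions and both boundaries, which is far cheaper than exhaustively entering values into the GUI field.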

Scenario testing: A system-level Black Box approach that also assures good White Box logic-level coverage for Client/Server systems.

The decision logic table (DLT): A DLT represents an external view of the functional specification that can supplement scenario testing from a logic-coverage perspective. Each logical condition in the specification becomes a control path in the finished system, and each rule in the table describes a specific instance of a pathway that must be implemented. Hence, test cases based on the rules of a DLT provide adequate coverage of the module’s logic, independent of its coded implementation.
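The rule-per-pathway idea can be sketched directly. The order-approval conditions and actions below are invented for illustration; what matters is that each row of the table becomes one test case, so covering the table covers every specified pathway.

```python
# A decision logic table for a hypothetical order-approval rule:
# each rule pairs a combination of conditions with the expected action.
# conditions: (in_stock, credit_ok) -> expected action
dlt_rules = {
    (True,  True):  "approve",
    (True,  False): "hold",
    (False, True):  "backorder",
    (False, False): "reject",
}

def process_order(in_stock, credit_ok):
    """The implementation under test (a stand-in for the real module)."""
    if not in_stock:
        return "backorder" if credit_ok else "reject"
    return "approve" if credit_ok else "hold"

# One test case per rule: the suite is derived from the specification,
# independent of how process_order happens to be coded.
for conditions, expected in dlt_rules.items():
    assert process_order(*conditions) == expected, conditions
print(f"all {len(dlt_rules)} DLT rules covered")
```

Because the cases come from the specification rather than the code, the same suite remains valid if `process_order` is rewritten, which is exactly the implementation-independence the DLT technique promises.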

In addition to these traditional testing techniques, a number of companies have begun producing structured capture/playback testing tools that address the unique properties of GUIs. The difference between the traditional and structured capture/playback paradigms is that traditional capture/playback occurs at an external level: it records input as keystrokes or mouse actions and output as screen images, which are saved and compared against the inputs and output images of subsequent runs.

Structured capture/playback, by contrast, is based on an internal view of external activities. The application program’s interactions with the GUI are recorded as internal “events” that can be saved as scripts written in a scripting language.
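The event-script idea can be sketched as follows. All names here are illustrative (`FakeApp`, the field names, the event tuples); the point is that the recording is a list of logical events, not screen images, and playback simply re-dispatches them.

```python
# Sketch of structured capture/playback: GUI interactions recorded as
# internal events, saved as a script, and replayed against the app later.

recorded_script = [                 # what a capture tool might save
    ("type", "username", "alice"),
    ("type", "password", "s3cret"),
    ("click", "login_button", None),
]

class FakeApp:
    """Minimal stand-in for the application under test."""
    def __init__(self):
        self.fields = {}
        self.logged_in = False

    def handle(self, action, target, value):
        # Dispatch a logical event, as the GUI layer would.
        if action == "type":
            self.fields[target] = value
        elif action == "click" and target == "login_button":
            self.logged_in = self.fields.get("username") == "alice"

app = FakeApp()
for event in recorded_script:       # playback: re-dispatch each recorded event
    app.handle(*event)

assert app.logged_in
print("playback reproduced the recorded session")
```

Because the script names logical objects (`login_button`) rather than pixel coordinates, it survives cosmetic changes to the screen layout, which is the practical advantage of the structured approach over raw image comparison.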

3.2 Testing on the Server Side---Application Testing:

Several kinds of server-side tests can be driven by scripts: load tests, volume tests, stress tests, performance tests, and data-recovery tests.