
Wednesday, December 4, 2019

Integration testing in software engineering

INTEGRATION TESTING:


Integration testing is carried out after all (or at least some of) the modules have been unit tested. Successful completion of unit testing, to a large extent, ensures that the unit (or module) as a whole works satisfactorily. In this context, the objective of integration testing is to detect errors at the module interfaces (call parameters). For example, it is checked that no parameter mismatch occurs when one module invokes the functionality of another module. Thus, the primary objective of integration testing is to test the module interfaces, i.e., to check that there are no errors in parameter passing when one module invokes the functionality of another module. For background, see the previous tutorial in this series on debugging in software engineering: https://www.education-guru.us/2019/12/what-is-debugging.html
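As a small illustration (the module names and the 0.25 tax rate are invented for this sketch, not taken from any real system), the following Python fragment shows the kind of interface error integration testing targets: two modules that each pass unit testing in isolation, with an integration test exercising the parameter contract between them.

```python
# Hypothetical "tax" module: expects the rate as a fraction (0.25 = 25%).
def compute_tax(amount, rate):
    return amount * rate


# Hypothetical "billing" module: invokes the tax module across a module
# interface. A parameter mismatch here (e.g. passing 25 instead of 0.25)
# could slip past unit tests of either module alone, but is caught by an
# integration test of the pair.
def total_price(amount):
    return amount + compute_tax(amount, 0.25)


# Integration test: exercises the billing -> tax interface.
assert total_price(100) == 125.0
```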










The objective of integration testing is to check whether the different modules of a program interface with each other properly. During integration testing, different modules of a system are integrated in a planned manner using an integration plan. The integration plan specifies the steps and the order in which modules are combined to realize the full system. After each integration step, the partially integrated system is tested. An important factor that guides the integration plan is the module dependency graph. As discussed in Chapter 6, a structure chart (or module dependency graph) specifies the order in which different modules call each other. Thus, by examining the structure chart, the integration plan can be developed. Any one (or a mixture) of the following approaches can be used to develop the test plan:



• Big-bang approach to integration testing

 • Top-down approach to integration testing

 • Bottom-up approach to integration testing

• Mixed (also called sandwiched) approach to integration testing

In the following subsections, we provide an overview of these approaches to integration testing.




Big-bang approach to integration testing :

Big-bang testing is the most obvious approach to integration testing. In this approach, all the modules making up a system are integrated in a single step. In simple words, all the unit tested modules of the system are simply linked together and tested. However, this technique can meaningfully be used only for very small systems. The main problem with this approach is that once a failure has been detected during integration testing, it is very difficult to localize the error, as the error may potentially lie in any of the modules. Therefore, errors reported during big-bang integration testing are very expensive to debug and fix. As a result, big-bang integration testing is almost never used for large programs.




Bottom-up approach to integration testing :

Large software products are often made up of several subsystems. A subsystem might consist of many modules that communicate among each other through well-defined interfaces. In bottom-up integration testing, first the modules for each subsystem are integrated. Thus, the subsystems can be integrated separately and independently. The primary purpose of carrying out the integration testing of a subsystem is to test whether the interfaces among the various modules making up the subsystem work satisfactorily. The test cases must be carefully chosen to exercise the interfaces in all possible manners.


In pure bottom-up testing no stubs are required; only test drivers are required. Large software systems normally require several levels of subsystem testing: lower-level subsystems are successively combined to form higher-level subsystems. The principal advantage of bottom-up integration testing is that several disjoint subsystems can be tested simultaneously. Another advantage of bottom-up testing is that the low-level modules get tested thoroughly, since they are exercised in each integration step. Since the low-level modules usually perform I/O and other critical functions, testing them thoroughly increases the reliability of the system. A disadvantage of bottom-up testing is the complexity that arises when the system is made up of a large number of small subsystems that are at the same level. This extreme case corresponds to the big-bang approach.
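The role of a test driver can be sketched in Python as follows (the RecordStore module and its methods are assumptions made for this example): the driver is a throwaway harness that stands in for the higher-level modules that are not yet integrated and calls down into the subsystem under test, so no stubs are needed.

```python
# Hypothetical low-level storage module at the bottom of the hierarchy.
class RecordStore:
    def __init__(self):
        self._records = {}

    def put(self, key, value):
        self._records[key] = value

    def get(self, key):
        return self._records.get(key)


# Test driver: plays the role of the not-yet-integrated higher-level
# modules and exercises the subsystem's interface in several ways.
def run_driver():
    store = RecordStore()
    store.put("isbn-001", "Software Engineering")
    assert store.get("isbn-001") == "Software Engineering"
    assert store.get("missing") is None  # absent key must not crash
    return "subsystem interface OK"


print(run_driver())
```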


Top-down approach to integration testing:



Top-down integration testing starts with the root module in the structure chart and one or two subordinate modules of the root module. After the top-level 'skeleton' has been tested, the modules that are at the immediately lower layer of the 'skeleton' are combined with it and tested. Top-down integration testing approach requires the use of program stubs to simulate the effect of lower-level routines that are called by the routines under test. A pure top-down integration does not require any driver routines. An advantage of top-down integration testing is that it requires writing only stubs, and stubs are simpler to write compared to drivers. A disadvantage of the top-down integration testing approach is that in the absence of lower-level routines, it becomes difficult to exercise the top-level routines in the desired manner since the lower-level routines usually perform input/output (I/O) operations.
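The role of a program stub can be sketched as follows (Python; the report and sales names are invented for illustration): the top-level routine is exercised while a stub simulates the lower-level I/O routine that has not yet been integrated.

```python
# Stub: simulates a lower-level routine (which would normally perform
# I/O) by returning canned data, so the top-level routine can be tested
# before the real lower layer exists.
def fetch_sales_stub(region):
    return {"north": [100, 200], "south": [50]}.get(region, [])


# Top-level routine under test; 'fetch' is its interface to the lower
# layer of the structure chart.
def report_total(region, fetch=fetch_sales_stub):
    return sum(fetch(region))


assert report_total("north") == 300  # stubbed data: 100 + 200
assert report_total("east") == 0     # unknown region -> empty list
```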


Mixed approach to integration testing:


The mixed (also called sandwiched) approach to integration testing follows a combination of the top-down and bottom-up testing approaches. In a top-down approach, testing can start only after the top-level modules have been coded and unit tested. Similarly, bottom-up testing can start only after the bottom-level modules are ready. The mixed approach overcomes this shortcoming of the top-down and bottom-up approaches. In the mixed testing approach, testing can start as and when modules become available after unit testing. Therefore, this is one of the most commonly used integration testing approaches. In this approach, both stubs and drivers need to be designed.


 Phased versus Incremental Integration Testing:


Big-bang integration testing is carried out in a single step of integration. In contrast, in the other strategies, integration is carried out over several steps. In these latter strategies, modules can be integrated either in a phased or an incremental manner. A comparison of these two strategies is as follows:

• In incremental integration testing, only one new module is added to the partially integrated system each time.

• In phased integration, a group of related modules is added to the partial system each time.

Phased integration requires fewer integration steps than incremental integration. However, when a failure is detected, it is easier to localize the error in incremental integration, since only one module was added in the last step.
The following kinds of test cases are designed from the UML models:

Use case-based testing:

• Scenario coverage: Each use case typically consists of a mainline scenario and several alternate scenarios. For each use case, the mainline and all alternate sequences are tested to check if any errors show up.

Class diagram-based testing:

• Testing derived classes: All derived classes of the base class have to be instantiated and tested. In addition to testing the new methods defined in the derived class, the inherited methods must be retested.

• Association testing: All association relations are tested.

• Aggregation testing: Various aggregate objects are created and tested.

Sequence diagram-based testing:

• Method coverage: All methods depicted in the sequence diagrams are covered.

• Message path coverage: All message paths that can be constructed from the sequence diagrams are covered.
Integration Testing of Object-oriented Programs:

There are two main approaches to integration testing of object-oriented programs: thread-based and use-based.

• Thread-based approach: In this approach, all classes that need to collaborate to realize the behavior of a single use case are integrated and tested. After all the required classes for a use case are integrated and tested, another use case is taken up, and other classes (if any) necessary for the execution of the second use case are integrated and tested. This is continued until all use cases have been considered.

• Use-based approach: Use-based integration begins by testing classes that either need no service from other classes or need services from at most a few other classes. After these classes have been integrated and tested, classes that use the services of the already integrated classes are integrated and tested. This is continued till all the classes have been integrated and tested.








 SYSTEM TESTING:


After all the units of a program have been integrated together and tested, system testing is taken up.
The system testing procedures are the same for both object-oriented and procedural programs, since system test cases are designed solely based on the SRS document, and the actual implementation (procedural or object-oriented) is immaterial.


There are essentially three main kinds of system testing depending on who carries out testing:

1. Alpha Testing:

Alpha testing refers to the system testing carried out by the test team within the developing organization.


2. Beta Testing:

Beta testing is the system testing performed by a select group of friendly customers.


3. Acceptance Testing:

Acceptance testing is the system testing performed by the customer to determine whether to accept the delivery of the system.


In each of the above types of system tests, the test cases can be the same, but the difference is with respect to who designs test cases and carries out testing. The system test cases can be classified into
functionality and performance test cases.


Before a fully integrated system is accepted for system testing, smoke testing is performed. Smoke testing is done to check whether at least the main functionalities of the software are working properly. Unless the software is stable and at least the main functionalities are working satisfactorily, system testing is not undertaken.

The functionality tests are designed to check whether the software satisfies the functional requirements as documented in the SRS document. The performance tests, on the other hand, test the conformance of the system with the non-functional requirements of the system. We have already discussed how to design functionality test cases using a black-box approach in the context of unit testing. So, in the following subsections, we discuss only smoke and performance testing.



Smoke Testing:

Smoke testing is carried out before initiating system testing to check whether system testing would be meaningful, or whether many parts of the software would fail. The idea behind smoke testing is that if the integrated program cannot pass even the basic tests, it is not ready for system testing. For smoke testing, a few test cases are designed to check whether the basic functionalities are working. For example, for a library automation system, the smoke tests may check whether books can be created and deleted, whether member records can be created and deleted, and whether books can be loaned and returned.
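A minimal sketch of such a smoke test for the library automation example, in Python (the Library class and its method names are assumptions invented for this sketch, not a real API): each check exercises one of the basic functionalities named above.

```python
# Minimal in-memory stand-in for the library automation system.
class Library:
    def __init__(self):
        self.books = set()
        self.members = set()
        self.loans = {}   # book -> member

    def add_book(self, b):
        self.books.add(b)

    def remove_book(self, b):
        self.books.discard(b)

    def add_member(self, m):
        self.members.add(m)

    def issue(self, b, m):
        if b in self.books and m in self.members:
            self.loans[b] = m

    def return_book(self, b):
        self.loans.pop(b, None)


def smoke_test():
    lib = Library()
    lib.add_book("OS Concepts")
    lib.add_member("alice")
    lib.issue("OS Concepts", "alice")
    assert lib.loans["OS Concepts"] == "alice"   # books can be loaned
    lib.return_book("OS Concepts")
    assert "OS Concepts" not in lib.loans        # books can be returned
    lib.remove_book("OS Concepts")
    assert "OS Concepts" not in lib.books        # books can be deleted
    return True


assert smoke_test()
```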



Performance Testing :

Performance testing is an important type of system testing. There are several types of performance testing corresponding to the various types of non-functional requirements. The types of performance testing to be carried out on a specific system depend on the non-functional requirements documented in its SRS document. All performance tests can be considered as black-box tests.


Stress testing:

 Stress testing is also known as endurance testing. Stress testing evaluates system performance when it is stressed for short periods of time. Stress tests are black-box tests which are designed to impose a range of abnormal and even illegal input conditions so as to stress the capabilities of the software. Input data volume, input data rate, processing time, utilization of memory, etc., are tested beyond the designed capacity. For example, suppose an operating system is supposed to support fifteen concurrent transactions, then the system is stressed by attempting to initiate fifteen or more transactions simultaneously. A real-time system might be tested to determine the effect of simultaneous arrival of several high-priority interrupts.
Stress testing is especially important for systems that under normal circumstances operate below their maximum capacity but may be severely stressed at some peak demand hours. For example, if the corresponding non-functional requirement states that the response time should not be more than twenty seconds per transaction when sixty concurrent users are working, then during stress testing the response time is checked with exactly sixty users working simultaneously.
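A stress-test driver can be sketched as follows (Python; the TransactionServer is a hypothetical stand-in, and the check shown is only the simplest one, namely that no transactions are lost under concurrent load): many concurrent "users" are fired at the system at once, at or beyond its designed capacity.

```python
import threading

# Hypothetical transaction server: a counter protected by a lock.
class TransactionServer:
    def __init__(self):
        self._lock = threading.Lock()
        self.completed = 0

    def transact(self):
        with self._lock:
            self.completed += 1


# Stress driver: initiate n_users concurrent transactions at once,
# at or beyond the designed capacity, and verify none were lost.
def stress(n_users=100):
    server = TransactionServer()
    threads = [threading.Thread(target=server.transact) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return server.completed


assert stress(100) == 100  # no transactions lost under concurrent load
```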


Volume testing :


Volume testing checks whether the data structures (buffers, arrays, queues, stacks, etc.) have been designed to successfully handle extraordinary situations. For example, the volume testing for a compiler might be to check whether the symbol table overflows when a very large program is compiled.
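A volume test can be sketched in Python as follows (the bounded-buffer scenario is an invented example): far more items are pushed through a data structure than it would see in normal use, and it is checked that nothing is silently dropped.

```python
from collections import deque

# Volume test: push a very large number of items through a buffer and
# verify that every item can be drained back out, i.e. none were lost.
def volume_test(n=100_000):
    buf = deque()
    for i in range(n):
        buf.append(i)
    count = 0
    while buf:
        buf.popleft()
        count += 1
    return count


assert volume_test() == 100_000
```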


Configuration testing:


Configuration testing is used to test system behavior in the various hardware and software configurations specified in the requirements. Sometimes systems are built to work in different configurations for different users. For instance, a minimal system might be required to serve a single user, and other extended configurations may be required to serve additional users. During configuration testing, the system is configured in each of the required configurations, and it is checked whether the system behaves correctly in all required configurations, depending on the specific customer requirements.



Compatibility testing:

This type of testing is required when the system interfaces with external systems (e.g., databases, servers, etc.). Compatibility testing aims to check whether the interfaces with the external systems are performing as required. For instance, if the system needs to communicate with a large database system to retrieve information, compatibility testing is required to test the speed and accuracy of data retrieval.


Regression testing :

This type of testing is required when a software is maintained to fix some bugs or enhance functionality, performance, etc. Regression testing is also discussed in Section 10.13.


Recovery testing :

Recovery testing tests the response of the system to the presence of faults, or to the loss of power, devices, services, data, etc. The system is subjected to the loss of the mentioned resources (as discussed in the SRS document) and it is checked if the system recovers satisfactorily. For example, the printer can be disconnected to check if the system hangs. Or, the power may be shut down to check the extent of data loss and corruption.



Maintenance testing :

This addresses the testing of the diagnostic programs and other procedures that are required to help in the maintenance of the system. It is verified that these artifacts exist and perform properly.




Documentation testing:


It is checked whether the required user manuals, maintenance manuals, and technical manuals exist and are consistent. If the requirements specify the types of audience for which a specific manual should be designed, then the manual is checked for compliance with this requirement.

 Usability testing :

Usability testing concerns checking the user interface to see if it meets all user requirements concerning the user interface. During usability testing, the display screens, messages, report formats, and other aspects relating to the user interface requirements are tested. A GUI being just functionally correct is not enough. Therefore, the GUI has to be checked against the checklist we discussed in Sec. 9.5.6.

Security testing :


Security testing is essential for software that handles or processes confidential data that is to be guarded against pilfering. It needs to be tested whether the system is foolproof against security attacks such as intrusion by hackers. Over the last few years, a large number of security testing techniques have been proposed; these include password cracking, penetration testing, and attacks on specific ports.


 Error Seeding:


Sometimes customers specify the maximum number of residual errors that can be present in the delivered software. These requirements are often expressed in terms of the maximum number of allowable errors per line of source code. The error seeding technique can be used to estimate the number of residual errors in software.

Error seeding, as the name implies, involves seeding the code with some known errors. In other words, some artificial errors are introduced (seeded) into the program. The number of seeded errors detected during testing, compared with the number of unseeded errors detected, can then be used to estimate the number of errors remaining in the product.
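The usual error-seeding estimate can be sketched as follows (a sketch; it rests on the assumption that seeded and real errors are equally likely to be detected, so the detection ratio of seeded errors also applies to the real ones).

```python
# Standard error-seeding estimate, assuming seeded and real errors are
# equally detectable:
#   seeded_found / seeded  ≈  real_found / total_real_errors
def estimate_residual_errors(seeded, seeded_found, real_found):
    total_real = real_found * seeded / seeded_found   # estimated latent total
    return total_real - real_found                    # errors still undetected


# Example: 100 errors seeded, 50 of them found, and 20 real errors found.
# Half the seeded errors were caught, so the 20 real errors found are
# presumed to be half the real total of 40, leaving about 20 residual.
assert estimate_residual_errors(100, 50, 20) == 20.0
```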

