System-Level vs. Statement-Level Testing
Divide and conquer helps break a problem into smaller, comprehensible pieces. System/architecture-based testing supports testing the design and, unlike unit or module testing at the statement level, does not require an implementation.
This approach helps solve complex, large-scale software design and implementation problems. Program modularity, abstraction, and object-oriented programming help simplify and encapsulate this complexity.
As the size of a software system increases, the choice of data structures and algorithms becomes relatively less important, while problems of system integration and interactions between disparate modules become more challenging.
It is common practice to unit test each module in isolation, leaving full system-level testing for later. AT&T Labs reports that 60% of software bugs surface in system-level testing involving communication between all the moving parts.
It is important to note, then, that well unit-tested code gives confidence only at the statement level. In other words, unit testing cannot substitute for system testing. This does not imply doing only system-level testing either, since problems found there are more expensive to debug, isolate, and fix.
Unit- and module-level (statement-level) testing do not focus on the dependent components, interfaces, connections, and configurations of software systems. As a result, the unit-testing metric of code/statement coverage gives false positives when applied to system-level test cases. Each testing paradigm needs its own metrics.
The different forms of testing (unit, module, integration, subsystem, system, and acceptance) should each aim to show whether the software works, to find errors, or to check consistency:
- Unit and module testing validate the local behavior of individual software blocks.
- Integration testing validates that individual block behaviors, and the interactions among blocks, contribute to the global behavior of the system without regard to its decomposition.
- Subsystem testing refers to testing coherent software subsystems before they are integrated into the complete system.
- System testing compares the software system to its original objectives, in particular validating whether the software meets its functional and non-functional requirements.
- Acceptance testing involves the user directly, asking whether the user accepts the complete system.
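The contrast between the first and last of these levels can be sketched in code. This is a hypothetical example (the `Inventory`, `PaymentGateway`, and `OrderService` names are illustrative, not from the text): the unit test validates one block in isolation, while the system-level test exercises a workflow across components and their interfaces.

```python
class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError(f"insufficient stock for {item}")
        self.stock[item] -= qty


class PaymentGateway:
    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return {"status": "ok", "amount": amount}


class OrderService:
    """Component whose correctness depends on how its parts interact."""
    def __init__(self, inventory, gateway):
        self.inventory = inventory
        self.gateway = gateway

    def place_order(self, item, qty, unit_price):
        self.inventory.reserve(item, qty)             # interface to inventory
        return self.gateway.charge(qty * unit_price)  # interface to payments


def test_unit_reserve():
    # Unit test: one block in isolation -- statement-level confidence only.
    inv = Inventory({"widget": 5})
    inv.reserve("widget", 2)
    assert inv.stock["widget"] == 3


def test_system_place_order():
    # System-level test: a workflow spanning components and connections.
    svc = OrderService(Inventory({"widget": 5}), PaymentGateway())
    receipt = svc.place_order("widget", 2, unit_price=10)
    assert receipt == {"status": "ok", "amount": 20}
```

Note that `test_unit_reserve` can pass forever while `place_order` is broken, which is exactly why statement-level coverage overstates system-level confidence.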
Unit, module, and integration test cases cannot, and should not, be reused as system-level tests. This common practice of extrapolation is wrong, since the two classes of test cases are designed to expose different kinds of defects.
The type of test data and control flow produces different interactions and communication at the unit level than at the system level, although exhaustively testing all test data and program flow paths is impossible.
When performing system testing, testers are concerned with workflows that solve a particular problem.
System-level tests derived from architectures can validate that the software implements the architecture correctly and help to verify the architecture.
System-level testing concerns itself with communication among software components and subsystems, with functional requirements, and with validating whether the software system can solve the problem and reach the desired states.
Present-day system testing is ad hoc and exploratory, and lacks the precision of unit testing. System testing is complex due to the large number of components, their interrelationships, and attributes that must remain consistent across time and implementations.
System testing requires an understanding of the software at the architecture level. The system architecture provides a software description that can be used to generate tests at the system level. This lets testers abstract away unnecessary details and focus on the big picture of the system: its structure, high-level communication protocols, the assignment of software components and connectors to hardware components, the development process, and so on.
For system-level tests, component and subsystem interactions, rather than units of code, are the first-class citizens. At this level, object interactions matter more than the functionality of the individual objects and classes.
Test cases can be defined at the architecture level early in the development cycle, without waiting for implementation details. Design testing and architecture-based testing should therefore be done early. System testing then reduces cost by finding defects earlier, rather than just before release.
Most system and integration testing fails to address these problems at the architecture level. Thus, we need to differentiate system-level tests from architecture-level testing.
Architecture-based system testing focuses on four main attributes: Component + Interface + Connector + Configuration. A component is a module, process, procedure, or variable; an interface is the point of interaction between a component and its environment.
For example, in a distributed system architecture, subsystems are components, and network protocols are connectors.
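These four attributes can be made concrete in code. The following is a minimal sketch under assumed names (`Component`, `Connector`, `validate_configuration` are illustrative, not from the text): an architecture is a set of components with provided/required interfaces plus connectors binding them, and an architecture-level test checks that the configuration is well-formed before any implementation exists.

```python
from dataclasses import dataclass, field


@dataclass
class Component:
    name: str
    provides: set = field(default_factory=set)  # interfaces this component offers
    requires: set = field(default_factory=set)  # interfaces it needs from others


@dataclass
class Connector:
    source: str      # component that requires the interface
    target: str      # component that provides it
    interface: str


def validate_configuration(components, connectors):
    """Architecture-level check: every connector binds a real required
    interface to a real provided one, and nothing required is left unbound."""
    by_name = {c.name: c for c in components}
    errors, bound = [], set()
    for con in connectors:
        src, tgt = by_name.get(con.source), by_name.get(con.target)
        if src is None or tgt is None:
            errors.append(f"unknown component in {con}")
            continue
        if con.interface not in src.requires:
            errors.append(f"{src.name} does not require {con.interface}")
        if con.interface not in tgt.provides:
            errors.append(f"{tgt.name} does not provide {con.interface}")
        bound.add((con.source, con.interface))
    for c in components:
        for iface in c.requires:
            if (c.name, iface) not in bound:
                errors.append(f"{c.name} requires unbound interface {iface}")
    return errors


# Example: a client/server configuration with a network-protocol connector.
components = [
    Component("client", requires={"http"}),
    Component("server", provides={"http"}),
]
connectors = [Connector("client", "server", "http")]
assert validate_configuration(components, connectors) == []
```

Such a check runs against the architecture description alone, which is what allows these test cases to be written before the implementation is available.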
Architecture-based test cases are useful for quantitative analysis of invariants, deadlock detection, resource utilization, throughput rates, the effect of failures, and so on.
Problems with the current testing approach:
1). Agile test teams spend a lot of time iterating over the same test cases on different builds.
2). Test teams extrapolate unit, module, and integration test cases into system tests.
3). System testing is mainly ad hoc, random, manual, and exercises only positive cases.
4). Architecture-based testing is hardly exercised.
5). Test teams wait until the implementation is complete to start test cycles.
6). Shorter release cycles leave little time to understand the product well enough for meaningful system testing.
7). Regression and automated testing are redundant and not aimed at finding faults.
8). Almost all testing effort is aimed at validating good paths rather than breaking the system.
References:
1). Software Architecture Validation. AT&T Technical Journal.
2). B. Beizer. Software Testing Techniques. Van Nostrand Reinhold.
3). A. Bertolino, P. Inverardi, H. Muccini, and A. Rosetti. An Approach to Integration Testing Based on Architectural Descriptions. IEEE.
4). L. A. Clarke. Improve Architectural Description Languages to Support Analysis Better.
5). P. G. Frankl and E. J. Weyuker. An Applicable Family of Data Flow Testing Criteria. IEEE.