WHITE BOX TESTING (ALSO CALLED CLEAR BOX, GLASS BOX, OR OPEN BOX TESTING)
1. White box testing is a way of testing the external functionality of the code by examining and testing the program code that realizes the external functionality.
2. White box testing takes into account the program code, code structure and internal design flow.
3. Types: a) Static Testing b) Structural Testing
Static testing: Static testing is a type of testing that requires only the source code of the product, not the binaries or executables. It involves desk checking, code walkthroughs, and code inspections. In static testing the product is tested by humans, without executing the code.
Structural testing: Structural testing takes into account the code, code structure, internal design, and how they are coded. Tests are actually run by the computer on the built product.
Techniques:
1. Statement Coverage: Refers to writing test cases that execute all statements at least once.
2. Decision Coverage: Refers to writing test cases that execute each decision in both directions (true and false) at least once.
3. Condition Coverage: Refers to writing test cases that execute each condition within a decision with all possible outcomes (true and false) at least once.
4. Function Coverage: Identifies how many program functions are covered by test cases.
5. Cyclomatic complexity: A metric that quantifies the logical complexity of a program; for a control flow graph with E edges and N nodes it is V(G) = E - N + 2, which equals the number of independent paths that need to be tested. The coverage goals above are illustrated in the sketch after this list.
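A minimal sketch in Python of how these coverage goals differ in the test cases they demand. The function apply_discount and the pytest-style tests are hypothetical, written only for illustration:

# Hypothetical function: members spending over 100 get a flat discount of 10.
def apply_discount(price, is_member):
    discount = 0
    if is_member and price > 100:   # one decision made up of two conditions
        discount = 10
    return price - discount

def test_statement_coverage():
    # Statement coverage: one test that takes the True branch executes
    # every statement at least once.
    assert apply_discount(150, True) == 140

def test_decision_coverage():
    # Decision coverage: the decision as a whole must evaluate to both
    # True and False across the tests.
    assert apply_discount(150, True) == 140
    assert apply_discount(150, False) == 150

def test_condition_coverage():
    # Condition coverage: each condition (is_member, price > 100) takes
    # both outcomes across the tests.
    assert apply_discount(150, True) == 140    # is_member True,  price > 100 True
    assert apply_discount(50, False) == 50     # is_member False, price > 100 False

# Cyclomatic complexity: treating the single if as one decision gives
# V(G) = 2, i.e. two independent paths, so two test cases cover all paths.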
BLACK BOX TESTING
1. Black box testing focuses on testing the function of the program or application against its specification. Specifically, this technique determines whether combinations of inputs and operations produce expected results.
2. It is done without the knowledge of the internals of the system under test.
Techniques:
1. Positive and negative tests: Positive testing verifies the product against known, valid test conditions; negative testing tries to break the product with inputs and conditions outside what is specified (the unknowns).
2. Boundary value analysis: A technique that consists of developing test cases and data that focuses on the input and output boundaries of a given function.
3. Decision table: A decision table is useful when the input and output data can be expressed as Boolean conditions (true/false).
4. Equivalence partitioning: A technique for testing one representative from each equivalence class rather than undertaking exhaustive testing; it minimizes the number of test cases (see the sketch after this list).
5. Compatibility testing: Testing done to ensure that the product features work consistently with different infrastructure components is called compatibility testing.
6. User documentation testing: It is done to ensure the documentation matches the product and vice versa.
7. Domain testing: Testing the product using domain (business) expertise rather than relying only on the product specification.
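A brief Python sketch of equivalence partitioning and boundary value analysis. The validator validate_age and its valid range of 18 to 60 are assumed purely for illustration:

# Hypothetical validator; the valid range 18-60 is an assumed specification.
def validate_age(age):
    return 18 <= age <= 60

def test_equivalence_partitioning():
    # One representative value per equivalence class instead of every value.
    assert validate_age(10) is False    # class: below the valid range
    assert validate_age(35) is True     # class: inside the valid range
    assert validate_age(70) is False    # class: above the valid range

def test_boundary_value_analysis():
    # Defects cluster at the edges of a range, so test on and around them.
    assert validate_age(17) is False
    assert validate_age(18) is True
    assert validate_age(60) is True
    assert validate_age(61) is False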
INTEGRATION TESTING
1. Integration is defined as the set of interactions among components.
2. Testing the interaction between the modules and the interaction with other external systems is called integration testing.
3. Integration testing starts when two of the product components are available and ends when all component interfaces have been tested.
4. Integration methods: top-down (suits clear requirements and design), bottom-up (suits changing requirements and design), bi-directional (suits a stable design), and big bang (suits limited changes to the existing architecture). A top-down sketch using a stub is shown below.
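A small Python sketch of top-down integration, where a lower-level component that is not ready yet is replaced by a stub. OrderService and PaymentGatewayStub are hypothetical names used only for illustration:

# Hypothetical stub standing in for the real payment module, not yet integrated.
class PaymentGatewayStub:
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

# Upper-level module under test; it depends on a payment component.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

def test_order_service_with_payment_stub():
    # The interface between OrderService and the payment component is
    # exercised even though the real payment module is unavailable.
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(250) is True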
SYSTEM TESTING
1. System testing is conducted on the complete, integrated product.
2. It tests both the functional and non-functional aspects of the product.
3. Functional testing involves testing a product's functionality and features.
4. Non-functional testing involves testing the product's quality factors.
Non-functional testing:
1. Performance/Load testing: Evaluates the time taken (response time) by the system to perform its required functions under a given load (see the sketch after this list).
2. Scalability testing: Used to find out the maximum capability (limits) of the system.
3. Reliability testing: To evaluate the ability of the system to perform its required functions repeatedly for a specified period of time.
4. Stress testing: Evaluating a system beyond the limits of the specified requirements or system requirements to ensure the system does not break down unexpectedly.
5. Localization testing: Testing conducted to verify that the localized product works in different languages.
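A very simplified Python sketch of a response-time check. The function search_catalog and the 200 ms requirement are assumed for illustration; real performance and load testing uses dedicated tools and many concurrent users:

import time

# Hypothetical operation; the sleep stands in for the real processing work.
def search_catalog(term):
    time.sleep(0.01)
    return [term]

def test_response_time_within_limit():
    start = time.perf_counter()
    search_catalog("laptop")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.2    # assumed requirement: respond within 200 ms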
Functional testing techniques:
1. Design verification
2. Business vertical testing
3. Deployment testing
4. Beta testing
5. Certification, standards, and testing for compliance
ACCEPTANCE TESTING
Acceptance testing is done by the customer, or by a representative of the customer, to check whether the product is ready for use in the real-life environment.
REGRESSION TESTING
It is done to ensure that defect fixes made to the software work properly and do not affect the existing functionality.
RETESTING
Testing the same functionality of the system again, typically after a defect fix, to confirm that it now works as expected.
AD HOC TESTING
Testing done without using any formal testing technique is called ad hoc testing.
Techniques:
1. Monkey testing: Testing the product randomly, without predefined test cases.
2. Buddy testing: A developer and tester working as buddies to help each other on testing.
3. Pair testing: Testing done by two testers simultaneously.
4. Iterative testing: Testing carried out in iterations to deal with changing requirements.
5. Agile/extreme testing: Make frequent releases with customer involvement in product development.
USABILITY TESTING
Usability testing checks for user friendliness. Factors that determine usability are ease of use, speed, pleasantness, etc.
ACCESSIBILITY TESTING
Testing that determines if software will be usable by people with disabilities.
SECURITY TESTING
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
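A minimal Python sketch of such a check, using pytest for the expected-failure assertion. ALLOWED_ROLES and delete_report are hypothetical names invented for this example:

import pytest

# Hypothetical role-based restriction: only admins may delete a report.
ALLOWED_ROLES = {"admin"}

def delete_report(user_role):
    if user_role not in ALLOWED_ROLES:
        raise PermissionError("access denied")
    return "deleted"

def test_authorized_role_can_access():
    assert delete_report("admin") == "deleted"

def test_unauthorized_role_is_rejected():
    with pytest.raises(PermissionError):
        delete_report("guest")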
SMOKE TESTING
Smoke testing is done first, when a new build comes in for testing; it tests the basic functionality of the product to decide whether the build is fit for further levels of testing.
SANITY TESTING
Sanity testing is done last, when a build is about to be released to the client, to verify that all the major functionality is working fine; it is a subset of regression testing.
ALPHA TESTING
Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.
BETA TESTING
Testing the application after installation at the client's site.