How to Verify Systems Designed in Business Analysis
Verification is what most people think of when they hear the word testing — it’s the process of testing whether a business analysis solution does what it’s designed to do.
During verification, the testing team (which may consist of developers, quality assurance [QA] people, and some business analysts [BAs]) puts the software through its paces, both to confirm that it operates as expected and to ensure that it conforms to the design specifications laid out earlier in the project.
Verification testing includes four phases — one pretest phase and three phases of actual testing.
Smoke test

Also called a build verification test, a smoke test is a pretest that determines whether full testing can even begin in the first place. It reveals any simple failures in the solution that may prevent you from executing the tests in the next three phases. Some project teams may link this test to unit testing.
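A smoke test can be as simple as a handful of basic checks that must all pass before deeper testing starts. The sketch below is purely illustrative: the application and its two checks are hypothetical stand-ins, not part of any real system.

```python
# Hypothetical smoke test: verify the build is stable enough for full testing.
# app_starts and main_screen_loads are illustrative stand-ins for real checks.

def app_starts():
    """Stand-in for launching the application."""
    return True

def main_screen_loads():
    """Stand-in for rendering the main screen."""
    return True

def run_smoke_test():
    # If any basic check fails, abort before unit/integration/system testing.
    checks = {"startup": app_starts(), "main screen": main_screen_loads()}
    failures = [name for name, ok in checks.items() if not ok]
    return failures  # an empty list means full testing can begin

print(run_smoke_test())  # []
```

If the returned list isn't empty, the team fixes those failures before any of the three real testing phases begin.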
Unit test

The unit test is the first actual phase of testing. It involves testing each unit of the system on its own. The development team generally performs line-by-line testing of both function and structure to find bugs within the unit before any other tests are run.
Although unit tests are performed by the development team, you should have another group run the tests as well to ensure unbiased testing.
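A unit test exercises one unit in isolation, checking both its function (does it compute the right result?) and its structure (does it guard its boundaries?). The sketch below assumes a hypothetical discount-calculator unit; the function and its rules are illustrative, not from any real requirements document.

```python
import unittest

# Hypothetical unit under test: a discount calculator (illustrative only).
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Function: does the unit compute the right result?
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_invalid_percent(self):
        # Structure: does the unit guard its boundary conditions?
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the unit's tests stand-alone, before any integration testing.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that the tests touch only this one unit; nothing else in the system needs to exist yet.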
Integration test

The second phase of testing, the integration test, ensures the individual units can actually work together. These individual units working together can be considered a subsystem or just linked units. The objective of this test is to find problems with how the components of the system work together. It tests the validity of the software architecture design.
The development team generally performs the integration test, although BAs may help by providing test cases and reviewing test results.
Keep the following in mind about integration testing:
Units aren’t included in integration testing until they’ve successfully passed unit testing.
Sometimes integration tests can have multiple levels of integration. That is, several units are brought together and tested as a subsystem, and then those subsystems are integrated with larger subsystems.
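An integration test focuses on the hand-off between units that have already passed unit testing on their own. In this hypothetical sketch, one unit parses an order and a second unit prices it; the integration test checks that the second unit accepts exactly what the first produces.

```python
# Hypothetical integration test between two units (names are illustrative).

def parse_order(raw):
    """Unit 1: turn a raw order string into a structured record."""
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order, unit_price=4.0):
    """Unit 2: compute the total for a structured order record."""
    return order["qty"] * unit_price

def test_parse_then_price():
    # The integration test checks the hand-off: unit 2 must work with
    # exactly the record that unit 1 produces.
    total = price_order(parse_order("widget, 3"))
    assert total == 12.0

test_parse_then_price()
```

Each unit may pass its own tests yet still disagree with its neighbor about the data format; that disagreement is precisely what this phase is meant to surface.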
System test

This test is the testing phase you’re most involved in as a BA. The objective of the system test is to find problems with how the system meets the users’ needs. You run this test through the entire built system from end to end, auditing all units and integrations from a linear perspective.
The system test is the last chance for you and the project team to verify the product before it gets turned over to the users for a user acceptance test. It also confirms whether the software meets the original requirements, answering the “Did we build it right?” question.
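One way to picture a system test is as a single user scenario walked through every unit in sequence, end to end. The functions below are hypothetical stand-ins for a log-in, order, and checkout flow; only the shape of the test matters.

```python
# Hypothetical end-to-end system test: one user scenario through the whole
# built system. All functions are illustrative stand-ins.

def log_in(user):
    return {"user": user, "session": "s1"}

def create_order(session, item, qty):
    return {"item": item, "qty": qty}

def checkout(order):
    return {"status": "confirmed", "total": order["qty"] * 4.0}

def test_place_order_end_to_end():
    # "Did we build it right?" -- the whole flow, not individual pieces.
    session = log_in("pat")
    order = create_order(session, "widget", 3)
    receipt = checkout(order)
    assert receipt["status"] == "confirmed"
    assert receipt["total"] == 12.0

test_place_order_end_to_end()
```

Unlike a unit or integration test, a failure here may sit in any unit along the path, which is why this phase comes last and leans on the BA's knowledge of the users' workflow.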
Requirements validation test
This test verifies the system logic to ensure it supports the analysis requirements. Even though this work seems like it should be part of validation, you’re actually verifying whether you built your system according to what your requirements dictate.
Regression test

This test is basically a retest (regression refers to going backward). You use this test to ensure that the changes you made to the system as part of your solution don’t break what was already working. Regression usually impacts more than one program and requires more than one test.
When thinking about regression tests, you need to know what applications are impacted by the solution so you can test those applications to make sure nothing has changed. This point is where a traceability matrix can come in handy.
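A traceability matrix for this purpose can be as plain as a table mapping each requirement to the applications it touches. The sketch below uses hypothetical requirement IDs and application names to show how the matrix answers "what do I need to regression-test?".

```python
# Hypothetical traceability matrix: each requirement mapped to the
# applications it touches. IDs and names are illustrative.

TRACEABILITY = {
    "REQ-001 customer login":  ["web portal", "mobile app"],
    "REQ-002 order discount":  ["billing", "web portal"],
    "REQ-003 monthly report":  ["reporting"],
}

def impacted_applications(changed_requirements):
    """Return every application whose tests must be rerun."""
    apps = set()
    for req in changed_requirements:
        apps.update(TRACEABILITY.get(req, []))
    return sorted(apps)

print(impacted_applications(["REQ-002 order discount"]))
# ['billing', 'web portal']
```

If the solution changed the discount requirement, the matrix tells you that both billing and the web portal need regression runs, even though only one of them was deliberately modified.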
Dynamic test

In a dynamic test, you run the software under different circumstances and check how the system responds as those variables change over time. Three types of tests fall under this term:
Performance test: This test measures how fast the system can complete a function. To determine whether the test passes, refer to the nonfunctional requirements in your documentation that state what the response time should be.
Load test: This test checks whether the system can handle the required number of simultaneous users. If you have only 3 users, you can probably do this test manually; however, if you have to ensure that 2,500 users can be logged in at the same time, you’ll probably have to use an automated tool to load the system with that number of users.
Volume test: This test checks high-volume transactions to verify the software can handle all growth projections.
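A minimal performance test times the function under test and compares the result against the response-time figure stated in the nonfunctional requirements. Everything here is an assumption for illustration: the half-second threshold stands in for whatever your requirements document actually says, and the search function is a hypothetical stand-in for the real system call.

```python
import time

# Hypothetical performance test. The threshold below is an assumed value
# standing in for the response time your nonfunctional requirements state.
MAX_RESPONSE_SECONDS = 0.5

def search_catalog(term):
    """Stand-in for the system function being measured."""
    return [f"{term}-{i}" for i in range(1000)]

def test_search_meets_response_time():
    start = time.perf_counter()
    results = search_catalog("widget")
    elapsed = time.perf_counter() - start
    assert results, "search returned nothing"
    assert elapsed <= MAX_RESPONSE_SECONDS, f"too slow: {elapsed:.3f}s"

test_search_meets_response_time()
```

The same pattern scales up to load testing by running the timed call from many simulated users at once, which is where the automated tools mentioned above come in.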
Security test

Security testing ensures that unauthorized users can’t gain access to confidential data. It also certifies that authorized users can effectively complete their tasks. A good diagram to determine which users can perform which functions is a use case diagram or a security matrix (a diagram that shows which users may access which functions).
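A security matrix translates naturally into test cases in both directions: authorized users must succeed, and unauthorized users must be refused. The roles and functions below are hypothetical examples, not from any real access model.

```python
# Hypothetical security matrix: which roles may access which functions.
# Roles and function names are illustrative.
SECURITY_MATRIX = {
    "clerk":   {"view orders"},
    "manager": {"view orders", "approve refunds"},
}

def can_access(role, function):
    """Look up whether a role is allowed to perform a function."""
    return function in SECURITY_MATRIX.get(role, set())

# Positive test: an authorized user can complete the task.
assert can_access("manager", "approve refunds")
# Negative test: an unauthorized user can't reach the confidential function.
assert not can_access("clerk", "approve refunds")
```

Writing the negative tests is the part teams most often skip, yet it's the half that actually verifies confidential data stays protected.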
Installation test

This test makes sure the software installs on the machine as you expect, with no problems in the installation process. When testing, make sure the requirements for the system you’re installing on are stated.
Compatibility test

This test determines how well the product works with different environmental configurations. For example, if your requirements state the product requires a PC or Mac with the latest version of Internet Explorer or Safari, you need to test installation on both operating systems (OSes) and with the browser configurations on each.
Usability test

A usability test is really a validation test; however, it’s sometimes done during system test time. If the product is a website that millions of customers will use or see, chances are you want to bring in usability engineers to build in usability from the start instead of waiting to test it at the end of the project.
Although your project may not be a multimillion dollar release, you still need to ensure that users will be able to effectively use it.