
The Thing About Testing – Part II

In my previous blog post, The Thing About Testing, I detailed my experiences as a Professional Services Consultant and the many different scenarios of implementing business application solutions. Today, I’d like to share three examples of how developers can determine what level of testing is needed before their “work” or project can be deemed complete.

Example 1: Machine-to-machine or trading partner integration requirement

Entity A needs to interface with Entity B to send orders to the company’s warehouse for fulfillment, using an agreed protocol and an agreed format, say XML over a secured HTTP connection. The integration is to be automated, occur in real time, and require no user interface. The programming objective is to create a functional program that will, upon invocation, create an order fulfillment request transaction. The fulfillment request transaction must be sent to Entity B over a secured transport. And finally, the solution must confirm the success of the sent transaction.
What needs to be tested to ensure this all works?

a. Unit testing:

Run a test program to validate that the program can be invoked with the correct parameters, and validate what should occur when incorrect parameters are passed. Confirm that the expected fields are being read or populated, and validate what should occur with blank or incorrect data. Validate creation of the request transaction.
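For illustration, a minimal unit test sketch in Python might look like the following. The `create_fulfillment_request` routine and `InvalidOrderError` exception are hypothetical stand-ins, not part of the original solution.

```python
import unittest

# Hypothetical module and names, used here purely for illustration.
from fulfillment import create_fulfillment_request, InvalidOrderError


class CreateFulfillmentRequestTests(unittest.TestCase):
    def test_valid_parameters_produce_request(self):
        # A well-formed order should yield a populated request document.
        request = create_fulfillment_request(order_id="A-1001", quantity=3)
        self.assertEqual(request["order_id"], "A-1001")
        self.assertEqual(request["quantity"], 3)

    def test_missing_order_id_is_rejected(self):
        # Blank or missing data should raise a clear, documented error.
        with self.assertRaises(InvalidOrderError):
            create_fulfillment_request(order_id="", quantity=3)

    def test_negative_quantity_is_rejected(self):
        with self.assertRaises(InvalidOrderError):
            create_fulfillment_request(order_id="A-1001", quantity=-1)


if __name__ == "__main__":
    unittest.main()
```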

b. Integration testing:

Validate what should be done at any point of failure in the connection. Integration projects should always consider error handling for connectivity failure. Should there be retries, and if so, how many? What should be done when retries reach their limit: should notifications be sent or messages relayed? Validate that appropriate responses occur under various conditions and that it is clear when transactions are successful and when they are not.
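As a rough sketch of the kind of retry and notification policy described above, the snippet below shows one possible shape. The retry limit, delay, transport call and `notify_operations` hook are all assumptions to be replaced by whatever the requirement specifies.

```python
import logging
import time

logger = logging.getLogger(__name__)

# Illustrative policy values; real limits should come from the requirement spec.
MAX_RETRIES = 3
RETRY_DELAY_SECONDS = 5


def send_with_retries(send_fn, payload):
    """Attempt delivery, retrying on connection errors up to MAX_RETRIES."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return send_fn(payload)  # hypothetical transport call
        except ConnectionError as exc:
            logger.warning("Attempt %d of %d failed: %s", attempt, MAX_RETRIES, exc)
            if attempt == MAX_RETRIES:
                # Retry limit reached: escalate rather than fail silently.
                notify_operations(payload, exc)
                raise
            time.sleep(RETRY_DELAY_SECONDS)


def notify_operations(payload, error):
    # Placeholder: a real project might send an email, page or queue message here.
    logger.error("Delivery failed after retries; manual intervention required: %s", error)
```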

c. System testing:

Validate that the transaction runs functionally end-to-end (from start to finish) and produces the expected results. Is there a need for stress testing? Has consideration been given to scenarios where the transaction volume may exceed expected limits? Does the number of expected transactions require performance tuning and possibly load balancing? Is there a need for security testing? Are any points in the process potentially vulnerable to external attack?
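A rough volume check might look like the sketch below. The peak transaction count, worker count and the simulated `send_test_transaction` call are placeholders to be replaced by the project’s real figures and real invocation.

```python
import concurrent.futures
import time


def send_test_transaction(i):
    """Hypothetical stand-in for invoking the real fulfillment request."""
    time.sleep(0.01)  # simulate round-trip latency for the sketch
    return True


EXPECTED_PEAK = 200  # assumed peak transaction count per window; confirm against the spec
WORKERS = 20         # assumed degree of concurrency

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(send_test_transaction, range(EXPECTED_PEAK)))
elapsed = time.perf_counter() - start

print(f"{sum(results)}/{EXPECTED_PEAK} transactions succeeded in {elapsed:.2f}s")
```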

d. User Acceptance testing:

This stage of testing should use true data in a UAT (User Acceptance Testing) environment. Testing done by the intended audience should validate that the requirements are being met.

The quality of testing correlates directly with the quality of the program specifications. Example 1 shows that the requirement specifications largely determine what type of testing, and what level of rigor, will be required. If expected thresholds, conditions for error handling and expected volumes are poorly defined at the start of the project, it will be hard to determine when the project and its testing are complete. Business sponsors will also find it hard to estimate project effort, cost and duration.

[Image: Five-step software development process]

Example 2: Modification to a current B2B ecommerce website to show an additional, specially calculated user price in the product catalog list display, based on selected product category criteria.

Users of the site are required to log in with an ID and password to access the site. The user interface runs across multiple desktop and tablet browsers. What testing is required in this scenario?

a. Unit testing:

Run independent (server-side) test programs to validate that the product and special pricing calculations match the expected values for the specific user. Run the same tests across multiple users and scenarios to validate the expected price calculation to the expected number of decimal places. Validate that the web page displays the same expected results for the same input criteria as the server-side program.
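A minimal sketch of such a server-side pricing test, assuming a hypothetical `special_price_for_user` routine and an agreed two-decimal precision (the user IDs, SKUs and expected prices are illustrative only), might look like this:

```python
import unittest
from decimal import ROUND_HALF_UP, Decimal

# Hypothetical server-side pricing routine; the name and signature are assumptions.
from pricing import special_price_for_user


class SpecialPricingTests(unittest.TestCase):
    def assert_price(self, user_id, product_id, expected):
        # Compare at the agreed precision (two decimal places here) rather than
        # relying on raw floating-point equality.
        actual = Decimal(str(special_price_for_user(user_id, product_id)))
        quantized = actual.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
        self.assertEqual(quantized, Decimal(expected))

    def test_contract_customer_discount(self):
        self.assert_price(user_id="CUST-042", product_id="SKU-100", expected="87.50")

    def test_standard_customer_list_price(self):
        self.assert_price(user_id="CUST-999", product_id="SKU-100", expected="99.95")


if __name__ == "__main__":
    unittest.main()
```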

b. Integration testing:

Run regression tests to ensure that previously working functional components are not impacted by the modification. Check that links to and from the page still behave as expected and pass the relevant variables. Validate error handling for invalid user IDs, failing login routines and missing price calculation formulas. Check for any overflow conditions.

c. System testing:

Validate that the website still performs the expected functionality end-to-end. Perform compatibility checks to ensure that the application behaves as required across the target browsers and devices. Test that performance expectations are met and monitor whether the modification has an impact on performance. Security testing should validate user authentication and authorization, as well as confirm that security policies are operating as expected.

d. User Acceptance testing:

UAT or Quality Assurance testing should validate that the page functions as expected and is visually consistent with the specifications. A/B testing, typically done by marketing, may be used to determine if placement of information is optimized for best user response.

An application requiring user interaction needs to consider the human factor: the user experience. I have often seen developers test only how they expect a user to use the application, not necessarily how a user may actually use it. Usability heuristics, whereby the user interface is designed following common guidelines, can be helpful in anticipating user interaction with the application.

Example 3: As part of a replacement and modernization of a company’s back-office financial system, month-end financial reports need to be tested.

For this scenario, the following testing factors need to be considered:

  • Validation of input parameters used to generate the reports
  • Running parallel tests to compare the old system reports with the new system reports
  • An understanding of acceptable variance allowances in case the old and new reports are not absolute matches, including the degree of precision required in the computation, for example rounding and decimal point positions (a minimal comparison sketch follows this list)
  • Validation of discrete calculations against the company’s old system, in case the old system’s reporting has flaws
  • Use of scrubbed/true data to run the tests
  • Iterative testing should use the same segments of data to ensure consistency in the output
  • Consideration of and testing for exception scenarios that may occur
  • Performance testing, to compare/improve processing times based on agreed benchmarks
  • User Acceptance testing to validate visual consistency with other reports, or agreed standards, as well as intuitiveness of report presentation.
  • Validate that the report format(s) can be opened by the target users and display/print well on all relevant devices.
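As referenced above, a minimal comparison sketch, assuming the month-end figures from both systems can be extracted as account/value pairs and that a one-cent variance has been agreed as acceptable, might look like this:

```python
from decimal import Decimal

# Illustrative tolerance; the acceptable variance should be agreed with the business.
ACCEPTABLE_VARIANCE = Decimal("0.01")


def compare_reports(old_rows, new_rows):
    """Compare month-end figures line by line and collect any differences
    that exceed the agreed variance. Rows are dicts keyed by account code."""
    differences = []
    for account, old_value in old_rows.items():
        new_value = new_rows.get(account)
        if new_value is None:
            differences.append((account, old_value, None))
            continue
        if abs(Decimal(str(old_value)) - Decimal(str(new_value))) > ACCEPTABLE_VARIANCE:
            differences.append((account, old_value, new_value))
    return differences


# Example usage with scrubbed sample data:
old = {"4000-SALES": "125000.10", "5000-COGS": "84211.55"}
new = {"4000-SALES": "125000.11", "5000-COGS": "84211.55"}
print(compare_reports(old, new))  # -> [] (within the one-cent tolerance)
```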

Projects involving replacement or redevelopment invariably involve a high focus on data comparison and preparation of the data, which may involve data conversion and migration.

There are some common practices that should be considered across all testing scenarios:

  • Ensure that the requirement, and hence the objective of the testing, is described in enough detail.
  • Start the test planning stage early on in the project. Not only will it help flag any requirements that may be vague, but it will also help identify what is to be coded, proven and validated.
  • Where applicable, have a user representative or business analyst define a matrix of use cases. This can then be used to establish the relevant test scripts, and also helps to qualify the sampling of data required to test the use case scenarios.
  • Test with scrubbed and true data that is meaningful, so that test comparisons are true. Using garbage data can produce unreliable results and disguise issues.
  • Ensure that the test environment has a clean baseline when starting. Also clear and reset the baseline where relevant, especially when running repeated tests. This will avoid misleading results due to incorrect data from previous test rounds.
  • Test with sufficient and distinct data samples across multiple scenarios. This will help avoid confusion when analyzing test comparisons across repeated cycles.
  • Separate the testing of server-based logic from user interface logic. A reusable test wrapper/container program that exercises only the server logic helps to ensure that the logic itself, such as validations, data retrieval and calculations, performs as expected. The reusable container should accept the relevant input parameters and output the results in a format that validates the logic and can be readily reviewed during testing (see the sketch after this list).
  • Consider peer testing. By letting developers test each other’s work during development and unit testing, a good percentage of the user requirements can be validated. Also, peer testing often helps flush out the defects that QA should not have to find.
  • Test with extreme and abnormal data to ensure appropriate overflow and error handling.
  • Use a consistent format for defect tracking. This should be used by the testers to report issues, by the developers to respond or update and by project stakeholders to monitor. Whether the format is part of a commercial testing solution or a home-grown utility, consistency of where and how defects are reported and handled is crucial to project continuity.
  • If the hardware or configuration of the test environment is set up differently from production, consider approaches to enable equivalent testing.
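For the reusable server-logic wrapper mentioned above, a minimal sketch might look like the following. The `validate_order` and `calculate_totals` routines are hypothetical stand-ins for the real server-side logic, and the JSON-in/JSON-out format is just one convenient choice.

```python
import argparse
import json

# Hypothetical server-side routines under test; names and signatures are assumptions.
from orders import calculate_totals, validate_order


def run_server_logic(raw_input):
    """Thin wrapper that exercises only the server-side logic, with no UI involved."""
    order = json.loads(raw_input)
    errors = validate_order(order)
    if errors:
        return {"status": "rejected", "errors": errors}
    return {"status": "ok", "totals": calculate_totals(order)}


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Reusable server-logic test wrapper")
    parser.add_argument("input_file", help="JSON file containing the test input")
    args = parser.parse_args()
    with open(args.input_file) as handle:
        result = run_server_logic(handle.read())
    # Emit the result in a format testers can inspect or diff between runs.
    print(json.dumps(result, indent=2, sort_keys=True))
```

Because the wrapper accepts a file of input parameters and prints its results in a stable, sorted format, the same test inputs can be replayed across repeated cycles and the outputs diffed directly.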

The principles listed above apply whether manual testing methods or automated testing tools are used. It is easy to fall into a cycle of exhaustive testing, and it is often difficult to define how much testing is enough. ‘Ad nauseam’ (till you get sick) testing is hardly ever financially feasible. There are always time and cost constraints, regardless of which project methodology is being used. The practical end point of the testing stage should be when it has been verified that the application meets the defined requirements, within the acceptance criteria identified by the stakeholders. If during the testing phase it is determined that a requirement and its acceptance criteria are still too vague, it may be necessary to re-evaluate, quantify and qualify the details, and then redesign the test plan needed to prove the acceptance criteria.

In circumstances where business requirements cannot be sufficiently and clearly defined up front, an alternative approach may be required, such as starting with a proof of concept, prototype modelling and development, or other agile methods. A model-office approach, a user-collaboration testing style used in agile development, will often reveal requirements that were not sufficiently understood at the beginning. Another agile method is test-driven development, where a test-first programming technique can assist with fine-tuning the details of the specifications.
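As a small illustration of the test-first idea, a specification-level test can be written before the routine it exercises even exists; the act of writing it forces details such as discount thresholds to be pinned down. The `volume_discount` routine and its figures below are hypothetical.

```python
import unittest


class VolumeDiscountSpec(unittest.TestCase):
    """Written before the pricing routine is implemented; running it first
    fails, then drives out the exact threshold and discount rules."""

    def test_ten_percent_discount_applies_at_100_units(self):
        from pricing import volume_discount  # hypothetical routine, not yet written
        self.assertEqual(volume_discount(unit_price=10.00, quantity=100), 900.00)

    def test_no_discount_below_threshold(self):
        from pricing import volume_discount
        self.assertEqual(volume_discount(unit_price=10.00, quantity=99), 990.00)


if __name__ == "__main__":
    unittest.main()
```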

In summary, deciding whether a software program or application is complete is not only about whether the development and thorough testing have been done. It is also about the quality of the specifications and a common understanding and interpretation between the development team and the project sponsors. Vague specifications can lead to a vague understanding of the expectations set. During the project scope assessment and specification stage, it is necessary to clarify any scope ambiguities as early as possible, identify assumptions and risks, and ensure that objectives are described as quantitatively as possible. This solid foundation enables optimal test design and planning, and results in unanimous agreement that the work is ‘done.’


LANSA Hybrid Low-Code solutions are fast to deploy and easy to maintain, delivering outstanding value for any application development project. Ready to get started?



