
The Thing about Testing

My experience in Professional Services Consulting has exposed me to many different scenarios and perspectives on implementing business application solutions, both large and small scale. Whether I am acting as the end user, the developer, the systems architect or the project manager, the dilemma I am often faced with is “when is something truly ready?”

When a developer says that they are “done” with a task, what does that really mean? I’ve encountered many varied responses to this question:

  • “It works as you asked, but I haven’t quite finished this part.”
  • “I think I’m done?”
  • “I will be done when I’m finished.”
  • “You test it, and you tell me!”

Establishing the point of completion is not about testing until the code is perfect and bug-free, but about determining:

  • Are there clear specifications provided to define the expectations of the work?
  • Which acceptance criteria were defined to deem the work complete?
  • Can it be proven that the acceptance criteria have been met?
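
One way to make that last question answerable is to express acceptance criteria as executable checks. Below is a minimal Python sketch, assuming a hypothetical pricing rule (“orders of 10 or more items receive a 10% discount”) and a hypothetical calculate_order_total function; the names and the rule are illustrative only, not taken from any specific project.

```python
# Hypothetical acceptance criterion: "Orders of 10 or more items
# receive a 10% discount; smaller orders are charged full price."

def calculate_order_total(unit_price, quantity):
    """Illustrative implementation of the pricing rule under test."""
    total = unit_price * quantity
    if quantity >= 10:
        total *= 0.9  # 10% volume discount
    return round(total, 2)

def test_discount_applies_at_ten_items():
    # Proves the criterion holds at exactly 10 items.
    assert calculate_order_total(5.00, 10) == 45.00

def test_no_discount_below_ten_items():
    # Proves full price is charged just below the threshold.
    assert calculate_order_total(5.00, 9) == 45.00
```

When the criteria are captured this way, “done” becomes a question of whether the checks pass rather than a matter of opinion.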

The next step is then to identify what testing is needed to ascertain that the work is truly “done” and ready for hand-off. There is a wide spectrum of testing layers that typically form part of any project methodology, including:

Unit Testing: conducted by a developer, less formal, discrete sections of code

Integration Testing: validates interfaces between functional units

System Testing: comprehensive testing of the entire system with respect to how it meets the requirements, user interface, performance and reliability

User Acceptance Testing: conducted by end users, the final testing prior to implementation
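
To make the first two layers concrete, here is a small Python sketch contrasting a unit test of a single function with an integration test that exercises the interface between two components. The OrderService and InMemoryOrderRepository names (and the 7% tax rate) are assumptions for illustration, not part of any particular framework.

```python
# --- Unit level: one discrete function, tested in isolation ---
def apply_tax(amount, rate=0.07):
    return round(amount * (1 + rate), 2)

def test_apply_tax_unit():
    # Unit test: no external dependencies involved.
    assert apply_tax(100.00) == 107.00

# --- Integration level: two units wired together through an interface ---
class InMemoryOrderRepository:
    """Stand-in for a real data store, to keep the example self-contained."""
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)

class OrderService:
    def __init__(self, repository):
        self.repository = repository

    def place_order(self, amount):
        order = {"total": apply_tax(amount)}
        self.repository.save(order)
        return order

def test_place_order_integration():
    # Integration test: verifies the service and repository work together.
    repo = InMemoryOrderRepository()
    service = OrderService(repo)
    service.place_order(50.00)
    assert repo.saved[0]["total"] == 53.50
```

System and user acceptance testing sit above these layers, exercising the assembled application against the requirements and, ultimately, the end users’ expectations.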

Within all these testing layers, there are different testing types, also referred to as testing objectives:

  • Regression: iterative testing to ensure changes have been successfully introduced and have not unintentionally affected other areas of the application
  • Functional: verification of each functional component
  • Peer: conducted by other developers on the team
  • Performance: measures the speed and responsiveness of the system
  • Stress or Load: tests for stability of the system beyond normal volume limits
  • Boundary: tests scenarios at and beyond the expected upper and lower boundary values (e.g. testing data overflow); a sketch follows this list
  • Security: tests that monitor the protection of data and maintenance of functionality against internal and external threats
  • Authorization and Authentication: validates that only authorized/intended users have access to the program
  • Compatibility: typical in scenarios where there are different interfaces for the same application
  • Usability: evaluation of how end users might use the system
  • UX: a variant of usability testing that focuses on the user experience – typically for web interfaces
  • A/B (split testing): another variant of usability testing, typically used by marketing to find which variant of an output provides optimum results
  • Model Office: simulates the live system scenario using active user representatives
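
As one example of these objectives, the boundary tests mentioned above can be captured as a table of edge cases. The sketch below uses pytest parameterization to probe a hypothetical set_quantity validation at and just beyond its expected limits; the field and its limits (1 to 999) are assumptions chosen purely for illustration.

```python
import pytest

MAX_QUANTITY = 999  # assumed upper limit for the illustrative field

def set_quantity(value):
    """Illustrative validation for an order-quantity field."""
    if not isinstance(value, int) or value < 1 or value > MAX_QUANTITY:
        raise ValueError("quantity out of range")
    return value

@pytest.mark.parametrize("value, should_pass", [
    (1, True),       # lower boundary
    (0, False),      # just below the lower boundary
    (999, True),     # upper boundary
    (1000, False),   # just above the upper boundary (overflow scenario)
])
def test_quantity_boundaries(value, should_pass):
    # Boundary test: values at and just beyond the expected limits.
    if should_pass:
        assert set_quantity(value) == value
    else:
        with pytest.raises(ValueError):
            set_quantity(value)
```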

Which testing types apply depends on several factors, such as the technology, interface, deployment, and intended audience. The state of the application also shapes the testing needs: whether enhancements are being made to an application already in production, whether the application is a rewrite or modernization of an existing one, or whether it is a brand-new system involving new technologies and deployment.

I’ll continue this discussion next week with three real-world examples that show how these factors come into play when choosing the most suitable tests. Stay tuned!


LANSA Hybrid Low-Code solutions are fast to deploy and easy to maintain, delivering outstanding value for any application development project. Ready to get started?



