Inspired by Rob Akershoek and his image of The Evolving Digital Delivery Model, I made a similar general chart to describe test automation progress as organizations evolve.
The chart below tries to depict how many organizations evolve with their test automation efforts.
It's not meant to be a guideline or roadmap, but something to guide discussions about test automation.
Insights for this chart are derived from numerous test automation assignments, many of which have been at a strategic level. The same experiences have shown that there are a lot of different circumstances that make parts of this chart debatable. For example, if the system under test is a SaaS system, and you still find it crucial enough for your business to implement test automation for it, you would likely find it problematic to version control your test automation code together with the code for the system under test.
Another example: if the goal of the tests is to make sure business processes can be performed end-to-end through several systems, it could make sense to use some method for identifying suitable existing data rather than testing with constructed data, as sketched below.
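As a rough sketch of what identifying existing data could look like (the table and column names below are invented for illustration), a test can query for a record that already satisfies its preconditions instead of inserting constructed data:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical helper: instead of inserting constructed data, find an
// existing order that already satisfies the test's preconditions.
public class ExistingDataFinder {

    private final Connection connection;

    public ExistingDataFinder(Connection connection) {
        this.connection = connection;
    }

    /**
     * Returns the id of some active order that has at least one
     * shipment, so an end-to-end flow can be exercised on data that
     * already exists across the integrated systems.
     */
    public String findActiveOrderWithShipment() throws SQLException {
        String sql = "SELECT o.id FROM orders o "
                   + "JOIN shipments s ON s.order_id = o.id "
                   + "WHERE o.status = 'ACTIVE' "
                   + "FETCH FIRST 1 ROWS ONLY";
        try (PreparedStatement stmt = connection.prepareStatement(sql);
             ResultSet rs = stmt.executeQuery()) {
            if (rs.next()) {
                return rs.getString("id");
            }
            throw new IllegalStateException("No suitable test data found");
        }
    }
}
```

The trade-off is that such tests depend on the state of the environment, which is exactly why this approach mostly makes sense for end-to-end flows across systems you don't fully control.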
For the most part this chart still makes sense. It's nothing new or visionary, but hopefully relevant documentation of how test automation usually progresses in an organization as it matures around the subject. The background, of course, being the factors highlighted by Rob Akershoek.
Many test automation initiatives start with the dedicated team testers. Then the developers gradually become aware of test automation and take over, claiming they can do the test automation as unit testing and developer integration testing. When the team realizes the developers focus a bit too much on testing only the code, and on getting rid of every obstacle that makes tests fail (claiming obstacles make the tests brittle), they realize that code-centric test automation is not enough to assure the business can use the system (it's easy to forget that a system consists of far more than code, even with IaC - Infrastructure as Code).
The test data process follows the general test approach, but with an extended need for reusable test data.
Some organizations even keep full production copies of data, and for some types of tests (like batch job duration tests or performance tests) that can be useful, but for the most part too much data in the test environment is a problem. This drives the use of small data sets and the implementation of data factories as an abstraction that enables the best test data provisioning for each test environment.
A test data factory uses the context to decide how to produce the desired test data, as described by its properties.
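A minimal sketch of that idea (all class, method, and property names here are invented for illustration): the caller states the properties it needs, and the factory decides, based on the environment context, whether to construct, create, or identify the data.

```java
// Hypothetical sketch of a test data factory. The caller describes the
// data it needs through properties, and the factory decides how to
// provision it based on the environment context.
public class TestDataFactory {

    public enum Environment { LOCAL, SYSTEM_TEST, STAGING }

    private final Environment environment;

    public TestDataFactory(Environment environment) {
        this.environment = environment;
    }

    /** Provides a customer matching the desired properties. */
    public Customer customer(boolean hasActiveSubscription) {
        switch (environment) {
            case LOCAL:
                // Small data set: construct an in-memory customer.
                return new Customer("customer-1", hasActiveSubscription);
            case SYSTEM_TEST:
                // Create through the system's own API so that
                // integrated systems see consistent data.
                return createViaApi(hasActiveSubscription);
            default:
                // Environments with lots of data: identify existing
                // data matching the properties instead.
                return findExisting(hasActiveSubscription);
        }
    }

    private Customer createViaApi(boolean hasActiveSubscription) {
        throw new UnsupportedOperationException("Omitted in this sketch");
    }

    private Customer findExisting(boolean hasActiveSubscription) {
        throw new UnsupportedOperationException("Omitted in this sketch");
    }

    public record Customer(String id, boolean hasActiveSubscription) {}
}
```

The value of the abstraction is that the tests stay the same while the provisioning strategy varies per environment.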
Initially, test automation is generally triggered manually. Unit tests are usually included in the build process quite quickly, but for GUI-based testing the test duration often means it's only feasible to run the tests nightly. In the long run, though, most tests are tuned to fit into the build pipeline.
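One common way to stage that progression is to partition the tests by speed. A minimal sketch with JUnit 5 tags (the tag name is just an example):

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class CheckoutTests {

    @Test
    void priceCalculationIsCorrect() {
        // Fast test: runs in the build pipeline on every commit.
    }

    @Test
    @Tag("nightly") // Example tag for slow GUI tests, excluded from
                    // the commit build and run in the nightly suite.
    void fullCheckoutFlowThroughGui() {
        // Drives the GUI end to end; too slow for every build.
    }
}
```

The commit-stage build then filters on the tag (for example with Gradle's `excludeTags("nightly")` or an equivalent Maven Surefire configuration), while the nightly job includes it. As the slower tests are tuned, they can be moved into the pipeline simply by dropping the tag.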
It's easier to do test automation when you don't have to set up data in external systems, or verify data ending up in other systems. A full focus on the specific system under test is enabled by stubbing the interfaces of integrated systems. Some types of external systems are generally harder to stub away, like authentication services or other security features. Often the next maturity step is to mock those away, at least if the system under test is built with dependency injection.
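A minimal illustration of why dependency injection makes that step easy (the interface and class names below are invented): the system under test depends only on an interface, so the test can inject a stub in place of the real authentication service.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical port to an external authentication service.
interface AuthenticationService {
    boolean isAuthorized(String userId);
}

// The system under test depends only on the interface, injected
// through the constructor, so a test can swap in a stub.
class OrderService {
    private final AuthenticationService auth;

    OrderService(AuthenticationService auth) {
        this.auth = auth;
    }

    String placeOrder(String userId) {
        if (!auth.isAuthorized(userId)) {
            throw new SecurityException("Not authorized");
        }
        return "order created for " + userId;
    }
}

class OrderServiceTest {

    @Test
    void placesOrderForAuthorizedUser() {
        // Stub: always authorizes, so the test never touches the
        // real authentication system.
        AuthenticationService alwaysAuthorized = userId -> true;
        OrderService service = new OrderService(alwaysAuthorized);
        assertEquals("order created for user-1", service.placeOrder("user-1"));
    }
}
```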
Test automation tends to start off as a proof of concept, and/or with testers not using any version control system. After a while the need for version control becomes apparent, since they realize they need to be able to run the test automation both on the latest development version and on hot-fixes for the current production version. When the test automation gains in trust and importance it usually ends up alongside the application code.
The test code ending up in the same repository as the application code also means all team members have the test code on their machines, making the test automation a team effort and a shared responsibility.
Another example would be grass-roots test automation throughout the organization confusing management into organizing it with Enterprise solution tools, only to realize after a few years that any multi-purpose test automation tool worth using is way too heavy for the CI/CD pipeline (in license cost, lead times, and installation footprint). The natural progression is then toward lightweight and efficient open source tools - which now suit the organization well, since it has accumulated enough knowledge and experience from previous tool usage.
The chosen test approach to focus on depends on who's responsible for the test automation, the natural progression mirroring the one for Responsibility.
In any attempt to describe a complex reality, such as test automation, you have to leave things out. For example, the following categories could have been included in the chart above but were left out.
The test automation purpose is of course one of the most relevant aspects of test automation, but it has intentionally been left out. The reason is that test automation initiatives are started for a lot of different reasons, and many of them are valid. I've heard valid reasons like:
For a lot of reasons, it often makes sense to integrate the test automation with issue tracking systems, CI/CD pipeline tools, test management systems, requirement systems, change process tracking tools, version control systems, and so forth.
Which type of integration is relevant in which circumstances depends on the tools used, the organizational setup, and many other factors. Since it is such a complex matter it has been left out.
The communication culture is one of the most relevant things for test automation readiness. However, there is no natural flow of maturity that correlates to the test automation effort in any foreseeable way.
It would be tempting to think that less documentation is better, but then again, the tests need to be documented in a way that even a computer can execute. There are mechanisms like Specification by Example to enable "living documentation", but even these come with a lot of annoying limitations and are in no way a silver bullet. In most implementations I've seen, it's merely an abstraction that helps with structuring the test automation code.
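To make that concrete, here is a minimal sketch of that abstraction using Cucumber's Java bindings (the specification wording and class names are invented): the business-readable specification lines map onto step-definition methods, which is where the code-structuring value mostly lies.

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Step definitions for a hypothetical specification such as:
//   Given a registered customer
//   When the customer orders a book
//   Then an order confirmation is sent
public class OrderingSteps {

    private String customerId;
    private String confirmation;

    @Given("a registered customer")
    public void aRegisteredCustomer() {
        customerId = "customer-1"; // e.g. provisioned by a test data factory
    }

    @When("the customer orders a book")
    public void theCustomerOrdersABook() {
        confirmation = "confirmation"; // would call the system under test
    }

    @Then("an order confirmation is sent")
    public void anOrderConfirmationIsSent() {
        if (confirmation == null) {
            throw new AssertionError("No order confirmation was sent");
        }
    }
}
```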
Many times there are also compliance requirements that affect how test automation is implemented, and those rarely lead to less documentation - but test automation can often help automate the documentation process.
Test automation is a complex subject, and naturally a lot of relevant, context-dependent issues have been left out. For a more thorough walk-through of test automation concerns, please view: