Responsibility
Many test automation initiatives originate with dedicated testers on the team.
Gradually the developers become aware of the effort and take it over, claiming they can cover it with unit testing and developer integration testing.
When the team notices that the developers focus a bit too much on testing only the code, and on removing every obstacle that makes tests fail (claiming it makes them brittle),
they realize that code-centric test automation is not enough to assure that the business can use the system (it's easy to forget that a system consists of far more than code, even with IaC - Infrastructure as Code).
Test data
The test data process follows the general test approach, but with a greater need for reusable test data.
Some organizations even keep full production copies of data, and for some types of tests (like batch job duration tests, or performance tests) that can be useful,
but for the most part too much data in the test environment is a problem. This drives the use of small data sets and the implementation of data factories
as an abstraction that enables the best test data provisioning for each test environment.
A test data factory uses the context to decide how to produce the desired test data, as described by its properties.
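As a minimal sketch of the idea (the class names, the "local" environment label, and the seeding behavior are all hypothetical, not from any specific framework), a data factory can inspect its environment context to decide whether to fabricate data from scratch or reuse seeded records:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    segment: str

class CustomerFactory:
    """Hypothetical test data factory: the provisioning strategy
    depends on which environment the tests run in."""

    def __init__(self, environment: str):
        self.environment = environment

    def create(self, segment: str = "retail") -> Customer:
        if self.environment == "local":
            # In a local sandbox, fabricate the data from scratch.
            return Customer(name="Test Customer", segment=segment)
        # In a shared environment, reuse a known seeded record instead
        # (simulated here; a real factory might call a seeding API).
        return Customer(name=f"seeded-{segment}-customer", segment=segment)

local_customer = CustomerFactory("local").create(segment="corporate")
shared_customer = CustomerFactory("system-test").create()
```

The test describes *what* data it needs through the properties; *how* that data comes to exist stays inside the factory, so the same test can run unchanged in every environment.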
Execution
Initially, the test automation is generally started manually. Unit tests are usually included in the build process quite quickly,
but for GUI-based testing the test duration often makes it more feasible to run the tests nightly. In the long run, however, most tests are tuned to fit into the build pipeline.
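One way this tuning can be sketched (the 30-second threshold, test names, and durations below are illustrative assumptions, not from the source) is a simple routing rule that sends fast tests into the commit pipeline and slow ones to the nightly run:

```python
# Hypothetical routing of tests between the commit pipeline and a
# nightly run, based on expected duration. The 30-second threshold
# is an assumption; real teams pick their own budget.
def pipeline_stage(expected_seconds: float) -> str:
    if expected_seconds <= 30:
        return "commit"   # fast enough to run on every build
    return "nightly"      # e.g. slow GUI-based end-to-end tests

suite = {
    "test_calculate_discount": 0.2,   # unit test
    "test_checkout_via_gui": 180.0,   # GUI-based test
}
stages = {name: pipeline_stage(secs) for name, secs in suite.items()}
```

Over time, as slow tests are made faster (or their data setup is stubbed away), more of them cross the threshold and move into the commit pipeline.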
[Figure: Comparison of execution frequency and test scope per iteration, manual vs automated testing]
De-coupling
Test automation is easier without having to set up data in external systems, or to verify data ending up in other systems. Full focus on the specific system under test is enabled by stubbing the interfaces of integrated systems. Some types of external systems are generally harder to stub away, like authentication services or other security features. Often the next maturity step is to mock those away as well, at least if the system under test is built with dependency injection.
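A minimal sketch of that last step (all class and method names here are hypothetical): with constructor injection, the test can hand the system under test a stub in place of the real authentication service, so no external identity provider is needed.

```python
class AuthService:
    """Real integration: talks to an external identity provider."""
    def is_authorized(self, user: str) -> bool:
        raise RuntimeError("network call not available in tests")

class StubAuthService:
    """Stub that replaces the external service during tests."""
    def is_authorized(self, user: str) -> bool:
        return True  # every test user is considered authorized

class OrderSystem:
    # Dependency injection: the auth service is passed in through
    # the constructor, so tests can substitute the stub.
    def __init__(self, auth):
        self.auth = auth

    def place_order(self, user: str, item: str) -> str:
        if not self.auth.is_authorized(user):
            return "rejected"
        return f"order placed: {item}"

system_under_test = OrderSystem(StubAuthService())
result = system_under_test.place_order("alice", "book")
```

Because the production wiring and the test wiring differ only in which object is injected, the system's own logic is exercised unchanged.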
Script management
Test automation tends to start off as a proof of concept, and/or with testers not using any version control system. After a while the need for version control becomes apparent, since the testers realize they need to be able to run the test automation both on the latest development version and on hot-fixes for the current production version. When the test automation gains trust and importance, it usually ends up alongside the application code.
The test code ending up in the same repository as the application code also means that all team members have the test code on their machines, making the test automation a team effort and a responsibility for all team members.
Tool type
Another example: grass-roots test automation throughout the organization makes management confused and eager to organize it with enterprise-solution tools,
only to realize after a few years that any multi-purpose test automation tool worth using is way too heavy for the CI/CD pipeline (in license cost, lead times, and installation footprint).
The natural progression is then the use of lightweight and efficient open source tools, which now suit the organization well, since it has accumulated enough knowledge and experience from previous tool usage.
Test approach
The test approach to focus on depends on who is responsible for the test automation. The natural progression mirrors the one described under Responsibility.
[Figure: Comparison of test purpose and duration for different test approaches (the numbers feel a bit off nowadays, but the visualization is valid)]