Test automation assessment concern area checklist

Version 1.5

Introduction

This is a checklist used to assess whether a team/project/product is ready for the implementation of successful test automation.

It consists of a number of areas, each of which has the potential to hinder successful test automation.

The primary use of this checklist is a workshop with the following participants:

The intended outcome of such a workshop is a mutual understanding of what needs to be done in order to get successful test automation in place, preferably with a responsible person and a deadline appointed for each identified task.

Domain areas concerning test automation

Goals and expectations for test automation

What do you wish to achieve with the test automation? What are the expectations? Good motivations or goals for test automation include speed gain, team satisfaction, or a refocusing of effort. Bad ones include cost reduction.
What test level do you wish to implement the automation on? (System test? Unit test?) Test automation can be performed at any test level. The bigger the system scope, the more effort is needed. Higher-level tests check business value; lower-level tests check code.
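
For illustration, a minimal sketch of what a lower-level (unit) test can look like, here using JUnit 5. The PriceCalculator class is an invented example, stubbed in so the sketch is self-contained:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical production class, included only to make the sketch compile.
    class PriceCalculator {
        private final double vatRate;
        PriceCalculator(double vatRate) { this.vatRate = vatRate; }
        double grossPrice(double netPrice) { return netPrice * (1 + vatRate); }
    }

    // Minimal unit test: checks code-level logic, no GUI or environment needed.
    class PriceCalculatorTest {
        @Test
        void vatIsAddedToTheNetPrice() {
            PriceCalculator calculator = new PriceCalculator(0.25); // 25 % VAT
            // 100.0 net should become 125.0 gross with 25 % VAT.
            assertEquals(125.0, calculator.grossPrice(100.0), 0.001);
        }
    }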

Regarding the system under test

Does the system have a GUI? Not all systems have a GUI. Automating against the GUI requires a lot more maintenance than automating tests against an API or at the unit level.
Is it an internally developed system, a pre-configured purchased system, a service, or a tailored standard system (or possibly something else entirely)? The mandate to influence the test object is central to how beneficial the short feedback loop of test automation is, and also to how efficiently the tests can be maintained.
Is there a structured change process in place, for feature CRs as well as deployment CRs? A structured change process can create specific opportunities for test automation goals, but it may also hinder testing. Regardless, it is something to be aware of.
How are integrations and dependencies on surrounding systems managed in the target test environment? Do mocks/stubs/a stable test system exist with consistent data in sync with the system under test? Service virtualization, mocks, stubs, fakes, or actual connections to external systems need to be in place for some types of tests.
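
As an illustration, a minimal stub of an external dependency, here using the WireMock library; the port, URL, and payload are invented:

    import com.github.tomakehurst.wiremock.WireMockServer;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    class ExternalSystemStub {
        // Starts a local stub that answers like an external customer registry,
        // giving the tests consistent data regardless of surrounding systems.
        void start() {
            WireMockServer server = new WireMockServer(8089);
            server.start();
            server.stubFor(get(urlEqualTo("/registry/customers/42"))
                .willReturn(aResponse()
                    .withStatus(200)
                    .withHeader("Content-Type", "application/json")
                    .withBody("{\"id\":42,\"name\":\"Test Customer\"}")));
        }
    }
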
What is the severity (consequence level) if errors are encountered in the system? What impact do they have? (Some systems are more sensitive to errors than others.) If the impact of problems in the system under test is low, the test automation effort might be better spent elsewhere.
Do the object names of GUI elements in the system contain dynamic identifiers? If GUI-level test automation is considered but the GUI elements are hard to identify, the automation will require a lot more effort.
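
A sketch of the difference, assuming Selenium WebDriver and a team that can add stable test attributes (such as data-testid) to the GUI; the element identifiers used here are made up:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    class LocatorExample {
        WebElement findSubmitButton(WebDriver driver) {
            // Brittle: auto-generated ids such as "btn_4711" tend to change
            // between builds, breaking tests for non-functional reasons.
            // return driver.findElement(By.id("btn_4711"));

            // More robust: a stable, developer-maintained test attribute.
            return driver.findElement(By.cssSelector("[data-testid='submit-order']"));
        }
    }
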
Are any third-party components used to extend the capabilities of the standard libraries for the programming language? (Question only valid for GUI-level testing.) Sometimes proprietary libraries, like Stingray grids, can make automation really hard.
What base technologies are used in the system under test (for example Java, .NET, C#, WebSphere, TN3270, SOA, REST)? Automation is often highly dependent on the technological context.
Are there any SOA/REST services, APIs, or similar in place that are suitable for test data management and/or testing? Testing APIs is a lot less maintenance-prone than GUI-level testing. APIs are also good for test data manipulation.
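
As an illustration, a sketch of test data setup through a REST API, here with the REST Assured library; the base URI and the /api/customers endpoint are invented:

    import static io.restassured.RestAssured.given;

    class TestDataSetup {
        // Creates a customer through a hypothetical REST endpoint so that
        // GUI tests do not have to click through registration forms first.
        void createTestCustomer() {
            given()
                .baseUri("https://test.example.com")  // made-up environment URL
                .contentType("application/json")
                .body("{\"name\":\"Test Customer\",\"segment\":\"B2B\"}")
            .when()
                .post("/api/customers")
            .then()
                .statusCode(201);
        }
    }
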
Are the external interfaces of the system documented? (Question valid for system-level testing and above.) Accurate documentation makes implementation a lot easier.
Does an updated and relevant SAD (System Architecture Description) exist for the system? A good grasp of the system makes test automation opportunities and obstacles clear.
To what extent are automated unit tests used for the system under test? Automated unit tests increase the likelihood that the code works, so system-level testing can concentrate on integrations, data variations, and configuration.
Does the system under test exist in branched versions (for example country-specific, customer-specific, brand-specific, or language-specific)? The structure of the automated tests will look different if the same tests have to be run on multiple similar versions of the system.
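
If the same tests must run against several similar versions, parameterization is one possible structure; a JUnit 5 sketch with invented market codes:

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;

    class MarketVariantTests {
        @ParameterizedTest
        @ValueSource(strings = {"SE", "NO", "FI"})  // made-up market variants
        void checkoutWorksForMarket(String market) {
            // ...run the same checkout scenario against the market-specific
            // configuration identified by the parameter...
        }
    }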

Regarding the surrounding development situation

How often do releases/deployments to production occur? A system that is only updated a few times a year rarely benefits from test automation, since the maintenance becomes too heavy.
Are team members eager about or reluctant toward test automation? If team members embrace test automation, maintenance and testability are secured. If not, these aspects need to be addressed.
In what phase of development/maintenance is the product now? Is it an old product or a new one? There are more changes to a new product, hence more maintenance of the test automation, but also more chances of finding bugs.
How often do releases/deployments to the test environment occur? If deploys are rare, proportionally much more time will be spent on maintenance for each test execution.
Are any established development practices in use? Depending on the development process or practice, there may be implications for the test automation.
Is the system under test developed in increments? In incremental implementation, each part of the application is built stand-alone, which is easy for test automation, as compared to an iterative approach where a wider part of the application may be developed concurrently.
Is any task/issue tracking system in use? Which one? Tools like Jira or Azure DevOps provide a lot of help for development teams. They can do the same for test automation, if integrated.
What programming language is the system developed in? What programming language do you expect the test automation to be in? Is it important that it is the same language? Knowledge sharing within the team is essential.
How does communication between testers and developers take place today? Tool-based? Meetings? Daily conversation? Depending on the type of team this can be very different, which affects test automation.
Are there any legal constraints on tool usage? Sometimes regional restrictions apply, or a preferred vendor, or data storage location requirements, or certain open-source licenses that are avoided.

Regarding the test automation

Who is planned to be responsible for maintenance of the test automation solution once it is in place? Have you considered what needs to be in place for a handover? For test automation to work, it must be built for ease of maintenance with its target users in mind.
Do you have any specific concept in mind (ATDD/BDD/TDD/MBT or something completely different)? Some teams already have a test automation idea they want to pursue.
Have you performed any proofs-of-concept with any tools? Do you already have a specific tool in mind? Some teams already have a test automation idea they want to pursue.
Are there any failed automation attempts in the project history? Why did they fail? Lessons learned from failed attempts are highly valuable when assessing new approaches.
Do you have a plan for backup and version management of the test scripts? Automated tests should be under version control, preferably managed the same way as the application code base.
How do you plan to document the test automation solution? Hand-overs, introductions, and solution assessments by Security or others are much easier if relevant documentation is at hand.
How often do you run the test cases now planned for automation, and how often do you plan to run them once automated? Tests are often executed more frequently once automated; if not, you can count on increased maintenance per test run.
Do you have any ideas or thoughts around how to manage test data for the test automation and its environment (input data, oracle data, background data, meta-data)? What about data that is used up during the test? Test data provisioning is often more complex than the actual test automation implementation, and an approach for test data needs to be developed.
Depending on the purpose of the test automation: How important are autonomously executable test suites, which require more error management routines? Test automation included in DevOps pipelines needs to be able to run unsupervised and log enough for debugging. Non-pipeline-based execution can more easily be manually supervised and can get away with less logging.
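
One common building block for unsupervised runs is to save evidence on failure; a sketch using Selenium's screenshot API (the output directory is made up):

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;

    class FailureEvidence {
        // Saves a screenshot so an unattended pipeline run can be debugged
        // afterwards without re-running the test manually.
        void saveScreenshot(WebDriver driver, String testName) throws Exception {
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Path target = Path.of("failure-evidence", testName + ".png");
            Files.createDirectories(target.getParent());
            Files.copy(shot.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
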
Have you tried to calculate an ROI (Return on Investment) for the test automation? Would you say it is relevant to do so (depending on the purpose of the test automation)? Some organizations require an ROI calculation prior to the start of implementation.
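
A back-of-the-envelope comparison can be enough; the figures below are entirely made up for illustration:

    // Year-one cost comparison with invented figures.
    class RoiSketch {
        public static void main(String[] args) {
            double buildCostHours = 200;        // initial automation effort
            double maintenancePerRunHours = 2;  // upkeep per execution cycle
            double manualRunHours = 16;         // manual execution of same suite
            int runsPerYear = 50;

            double automated = buildCostHours + maintenancePerRunHours * runsPerYear;
            double manual = manualRunHours * runsPerYear;
            // 200 + 2 * 50 = 300 hours automated vs 16 * 50 = 800 hours manual.
            System.out.printf("Automated: %.0f h, manual: %.0f h%n", automated, manual);
        }
    }
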
Is the idea to automate tests in the GUI, or at a level below the GUI (or both)? How often do the APIs/services change compared with the GUI, and how beneficial is the testing of each? API-level tests generally require far less maintenance, but sometimes the goal of the tests is checking end-to-end business availability.

Regarding the testing situation

Are there any documented test cases ready to be automated? Are they concise enough for automation? If the automation implementation has to start by redesigning all test cases, more effort is needed.
Are any of the test cases made for running in sequence after each other? Tests that can be run independently of each other are easier to parallelize to shorten execution time, and they give fewer problems with false negatives.
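
A sketch of test independence with JUnit 5, where each test provisions its own data instead of relying on a previous test; the provisioning helper is a stand-in for a real call against the system under test:

    import java.util.UUID;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    class OrderTests {
        private String customerId;

        // Stand-in for real provisioning, e.g. via an API as discussed above.
        private static String createCustomer() {
            return "customer-" + UUID.randomUUID();
        }

        @BeforeEach
        void freshDataPerTest() {
            // Each test gets its own customer, so the tests can run in any
            // order, or in parallel, without sharing state.
            customerId = createCustomer();
        }

        @Test
        void orderCanBePlaced() {
            // ...exercise the order flow with customerId...
        }

        @Test
        void orderCanBeCancelled() {
            // ...also self-contained...
        }
    }
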
What is the error frequency for the regression tests today? Are found errors common or rare? The value of performed tests is based on risk and value. If bugs are rarely found but the functionality is essential, automation can still be valuable.
What is the status of the environment the automated tests are planned to run in? Executing automated tests in a flaky/unstable environment only escalates frustration.
How large a proportion of the testing effort is test execution, compared to the time spent on managing the test environment, test data management, documentation, test case maintenance, studying changed functionality, and so forth? The only activity in QA that provides direct value is testing. If relatively little time is spent executing tests, maybe other activities should be automated first.
How many testers per developer are there currently in the project? Someone should always do a manual run-through of the application, and if that person is tied up with test automation, a bottleneck is created.
How many of the project staff have automation experience, and to what extent/of what kind? Test automation that is a one-man show fades and dies quickly. It is essential that responsibility is distributed within the team.
Do you use any test management tool? Which one? Would you see any benefit in connecting it to the automation? If compliance or management control would benefit from collecting test automation execution results in a test management tool, an integration could be useful.
To what degree do you have control over the test data and the permissions for the test user accounts in the targeted test environment? Test automation implementation rarely poses a problem, but the test data provisioning does. Do you have a test data strategy in place?

Regarding the test environment

Are there any complex dependencies in the data used in the test cases planned for automation? Test automation implementation rarely poses a problem, but the test data provisioning does. Do you have a test data strategy in place?
How are deploys to the test environment initiated? The more often deploys occur, the more often test automation execution can provide value.


Notes