Test automation assessment concern area checklist

Version 1.4

Introduction

This is a checklist used to assess whether a team/project/product is in a state ready for the implementation of successful test automation.

It consists of a number of areas, each of which has the potential to hinder successful test automation.

The prime use of this checklist is a workshop with the following participants:

The intended outcome of a workshop like this is a mutual understanding of what needs to be done in order to get successful test automation in place - preferably with an appointed responsible person and a deadline for each identified task.

Domain areas concerning test automation

Goals and expectations for test automation

What do you wish to achieve with the test automation? What are the expectations? Good backgrounds or goals for test automation include speed gain, team satisfaction, or refocus of effort. Bad ones include cost reduction.
What test level do you wish to implement the automation on? (System test? Unit test?) Test automation could be performed on any testing level. The bigger the system scope, the more effort is needed. Higher level tests check business value. Lower level tests check code.

Regarding the system under test

Does the system have a GUI? Not all systems contain a GUI. Automating towards the GUI requires a lot more maintenance than automating tests towards an API or unit test.
Is it an internally developed system, a pre-configured bought system, a service or a tailored standard system (or possibly something else entirely)? The mandate to influence the test object is central to how beneficial the short feedback loop of test automation is, but also to how efficient the maintenance of the tests will be.
Is there a structured change process in place? For feature CRs and for deployment CRs? A structured change process could give specific opportunities for test automation goals, but it may also hinder testing. Regardless, it's something to be aware of.
How are integrations and dependencies to surrounding systems managed in the target test environment? Do mocks/stubs/a stable test system with consistent data in sync with the system under test exist? Service virtualization, mocks, stubs, fakes, or actual connections to external systems need to be in place for some types of tests (see the stub sketch after this list).
What is the severity (consequence level) if errors are encountered in the system? What impact do they have? (Some systems are more sensitive to errors than others.) If the impact of problems in the system under test is low, the test automation effort might be better spent elsewhere.
Do the object names of GUI elements in the system contain dynamic identifiers? If GUI level test automation is considered but the GUI elements are hard to identify, the automation will require a lot more effort (see the locator sketch after this list).
Are any third party components used to extend the capabilities of the standard libraries for the programming language? (Question only valid for GUI level testing.) Sometimes proprietary libraries like Stingray grids can make automation really hard.
What base technologies are used in the system under test (for example Java, .NET, C#, WebSphere, TN3270, SOA, REST)? Automation is often highly dependent on the technological context.
Are there any SOA/REST services, APIs or similar in place that are suitable for test data management and/or testing? Testing APIs is a lot less maintenance prone than GUI level testing. APIs are also good for test data manipulation (see the test data sketch after this list).
Are the external interfaces to the system documented? (Question valid for system level testing and above.) Accurate documentation makes implementation a lot easier.
Does an updated and relevant SAD (System Architecture Description) exist for the system? A good grasp of the system makes test automation opportunities and obstacles clear.
To what extent are automated unit tests used for the system under test? Automated unit tests in place increase the likelihood that the code works, so system level testing can concentrate on integrations, data variations and configuration.
Does the system under test exist in branched versions (for example country specific, customer specific, brand specific, language specific)? The structure of the automated tests will look different if the same tests have to be run on multiple similar versions of the system.
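
Stub sketch: a minimal, hand-rolled stub for an external dependency, written in Java as one possible illustration. The type names (ExchangeRateService, FixedExchangeRateStub) and the fixed rate are invented for the example and do not refer to any real system; in a real project the types would live in separate files and the stub would be wired in only for test runs.

    // An interface in front of the external integration makes it possible to
    // swap the real connection for a stub in automated tests.
    interface ExchangeRateService {
        double rate(String fromCurrency, String toCurrency);
    }

    // The stub returns fixed, predictable data so the tests do not depend on
    // the external system being available or on its data changing over time.
    class FixedExchangeRateStub implements ExchangeRateService {
        @Override
        public double rate(String fromCurrency, String toCurrency) {
            return 1.25; // always the same rate, which keeps expected results stable
        }
    }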
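
Locator sketch: a minimal illustration, using Java and Selenium WebDriver, of why dynamic identifiers hurt GUI automation. The id btn_4711 and the data-testid attribute are hypothetical; the point is that a locator based on a generated id breaks whenever the id is regenerated, while a stable test-specific attribute does not.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    class LocatorExamples {
        // Brittle: the id is generated at runtime and changes between builds,
        // so this locator causes recurring maintenance.
        WebElement brittle(WebDriver driver) {
            return driver.findElement(By.id("btn_4711"));
        }

        // More robust: a stable attribute set by the developers for testing
        // purposes survives regeneration of the dynamic ids.
        WebElement robust(WebDriver driver) {
            return driver.findElement(By.cssSelector("[data-testid='save-order']"));
        }
    }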
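
Test data sketch: a minimal Java example of using a REST API to set up test data before a test instead of creating it through the GUI. The endpoint /api/customers and the JSON payload are assumptions made up for the illustration.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    class TestDataSetup {
        private final HttpClient client = HttpClient.newHttpClient();

        // Creates a known customer through the API so that a GUI or API test
        // can start from a predictable state.
        int createCustomer(String baseUrl) throws Exception {
            String json = "{\"name\":\"Test Customer\",\"country\":\"SE\"}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/api/customers"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            return response.statusCode(); // e.g. 201 if the data was accepted
        }
    }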

Regarding the surrounding development situation

How often do releases/deployments to production occur? A system that is only updated a few times a year rarely benefits from test automation since the maintenance becomes too heavy.
Are team members enthusiastic about or reluctant towards test automation? If team members embrace test automation, the maintenance and testability are secured. If not, these aspects need to be addressed.
In what phase of development/maintenance is the product now? Is it an old product or a new one?
How often do releases/deployments to the test environment occur?
Are any established development practices in use?
Is the system under test developed in increments?
Is any task/issue tracking system in use? Which one?
What programming language is the system developed in? What programming language do you expect the test automation to be in? Is it important that it is the same language?
How does communication between testers and developers take place today?

Regarding the test automation

Who is planned to be responsible for maintenance of the test automation solution once in place? Have you considered what needs to be in place for handover?
Do you have any specific concept in mind (ATDD/BDD/TDD/MBT or something completely different)?
Have you performed any proof-of-concepts with any tools? Do you already have a specific tool in mind?
Are there any failed automation attempts in the project history? Why did they fail?
Do you have a plan for backup and version management of the test scripts?
How do you plan to document the test automation solution?
How often do you currently run the test cases planned for automation, and when do you plan to run them once automated?
Do you have any ideas or thoughts around how to manage test data for the test automation and its environment (input data, oracle data, background data, meta-data)? What about data that is used up during the test?
Depending on the purpose of the test automation: how important is it that the test suites can execute autonomously, which requires more extensive error management routines?
Have you tried to calculate an ROI for the test automation? Would you say it's relevant to do that (depending on the purpose of the test automation)? A rough calculation sketch follows after this list.
Is the idea to automate tests in the GUI, or at a level below the GUI (or both)? How often do the APIs/services change compared with the GUI, and how beneficial is the testing of each?
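
ROI sketch: a minimal, back-of-the-envelope calculation in Java of when an automation investment pays off. All figures (build cost, maintenance per run, manual run cost, run frequency) are invented placeholders used only to show the arithmetic; replace them with your own estimates.

    class AutomationRoiSketch {
        public static void main(String[] args) {
            double buildCostHours = 200;       // initial effort to automate the suite
            double maintenancePerRunHours = 2; // upkeep per executed run
            double manualRunHours = 16;        // cost of running the same tests by hand
            double runsPerYear = 50;           // planned execution frequency

            // Hours saved each year by running automated instead of manually
            double yearlySaving = runsPerYear * (manualRunHours - maintenancePerRunHours);
            // Simple first-year ROI: saving minus investment, relative to investment
            double roi = (yearlySaving - buildCostHours) / buildCostHours;

            System.out.printf("Hours saved per year: %.0f%n", yearlySaving);
            System.out.printf("First-year ROI: %.2f%n", roi);
        }
    }

With these placeholder numbers the suite saves 700 hours per year and pays for itself within the first year; with only a few runs per year the same arithmetic quickly turns negative, which connects back to the question about release frequency above.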

Regarding the testing situation

Are there any documented test cases ready to be automated? Are they concise enough for automation?
Are any of the test cases designed to be run in sequence, one after another?
What is the error frequency for the regression tests today? Are found errors common or rare?
What's the status of the environment the automated tests are planned to run in?
How large a proportion of the testing effort is test execution, compared to the time spent on test environment management, test data management, documentation, test case maintenance, studying changed functionality and so forth?
How many testers per developer are there currently in the project?
How many of the project staff have automation experience, and to what extent/what kind?
Do you use any test management tool? Which one? Would you see any benefits in connecting this to the automation?
To what degree do you have control over the test data and the permissions for the test user accounts in the targeted test environment?

Regarding the test environment

Are there any complex dependencies in the data used in the test cases planned for automation?
How are deploys to the test environment initiated?


Notes