Patterns will help you solve test automation issues. This blog post is an introduction to a wiki that collates system-level test automation patterns to help with common automation problems. It is open to everyone in the test community, both to draw on the knowledge and to contribute more patterns.
Have you heard about Test Automation Patterns?
We are talking about more than patterns for unit tests. Seretta Gamba and Dot Graham created the Test Automation Patterns Wiki, which collates system-level test automation patterns. There are patterns for management, process, design and execution, and the issues that can be solved by applying them. Right now, there are some 70-80 patterns and about 50-60 issues. The wiki includes a diagnostic feature that lets you easily find the issue or issues that are bothering you; each issue then suggests which patterns to apply. The test automation patterns follow a rule of putting information in only one place. You’ll also notice a convention in the wiki: the patterns are written in capital letters and the issues in italic capitals to tell them apart. This way you can use their names almost as words in a kind of meta-language, a pattern language!
Test automation patterns are different from unit-test patterns because:
They are not prescriptive.
Unit-test or software design patterns take a development problem and give you exactly the code to solve it. You can implement it just as it is; at most you have to translate it into your own development language.
The system-level patterns are more generalised, even vague.
Think for instance of management issues: companies can be so different in structure, size, hierarchy and so on that it would be impossible to give a single solution for every situation. That’s why many of the patterns just suggest what kinds of solution you should try out.
How does the Test Automation Patterns Wiki work?
Below is an example of how you might use the wiki to help you improve or revive test automation. First, go to www.TestAutomationPatterns.org and try following this example. When you want to improve or revive test automation, the first question is:
Which of the options below describes the most pressing problem you have to tackle at the moment?
- Lack of support (from management, testers, developers etc.).
- Lack of resources (staff, software, hardware, time etc.).
- Lack of direction (what to automate, which automation architecture to implement etc.).
- Lack of specific knowledge (how to test the Software Under Test (SUT), use a tool, write maintainable automation etc.).
- Management expectations for automation not met (Return on Investment (ROI), behind schedule, etc.).
- Expectations for automated test execution not met (scripts unreliable or too slow, tests cannot run unattended, etc.).
- Maintenance expectations not met (undocumented data or scripts, no revision control, etc.).
When several of the options may be an issue, the best strategy is to start with the topic that hurts the most. In this example, we will opt for number 2, because to work efficiently you need time and resources to develop the framework. Once you click on your chosen link, a second question will appear.
Second question (for “Lack of resources”)
Please select what you think is the main reason for the lack of resources:
For the following items look up the issue AD-HOC AUTOMATION:
- Expenses for test automation resources have not been budgeted.
- There are not enough machines on which to run automation.
- The databases for automation must be shared with development or testing.
- There are not enough tool licences.
More reasons for lack of resources:
- No time has been planned for automation, or the time planned is not sufficient. Look up the issue SCHEDULE SLIP for help in this case.
- Training for automation has not been planned for. See the issue LIMITED EXPERIENCE for good tips.
- Nobody has been assigned to do automation; it gets done “on the side” when someone has time to spare. The issue INADEQUATE TEAM should give you useful suggestions on how to solve this problem.
If you feel the choices offered do not fully represent the issue, you can initially select one to see the next suggested steps, and if it doesn’t fit, you can go back a step. For this example, selecting the issue SCHEDULE SLIP suggests the following.
SCHEDULE SLIP (Management Issue)
Examples:
- Test automation is done only in people’s spare time.
- Team members are working on concurrent tasks which take priority over automation tasks.
- Schedules for what automation can be done were too optimistic.
- Necessary software or hardware is not available on time or has to be shared with other projects.
- The planned schedule was not realistic.
Alternatively, selecting number 1, ‘Lack of support: from management, testers, developers, etc.’, will bring up another question.
Second question (for “Lack of support”)
What kind of support are you missing?
If you are lacking support in a specific way, one of the following may give you ideas:
- Managers say that they support you, but back off when you need their support. Probably managers don’t see the value of test automation and thus give it a lower priority than, for instance, getting to market sooner.
- Testers don’t help the automation team.
- Developers don’t help the automation team.
- Specialists don’t help the automation team with special automation problems (Databases, networks etc.).
- Nobody helps new automators.
- Management expected a “silver bullet” or magic: managers think that after they bought an expensive tool, they don’t need to invest in anything else. See the issue UNREALISTIC EXPECTATIONS.
If you are having general problems with lack of support in many areas, the issue INADEQUATE SUPPORT may help. If you are having difficulty choosing, always select the answer that is most pressing now. This may be option number 2, ‘Testers don’t help the automation team’, which will bring up a third question.
Third question (for “Testers don’t help the automation team”)
Please select what you think is the main reason for the lack of support from testers:
- Testers think that the Software Under Test (SUT) is so complex that it’s impossible to automate, so why try.
- Testers don’t have time because they have an enormous load of manual tests to execute.
- Testers think that the SUT is still too unstable to automate and so don’t want to waste their time. Take a look at the issue TOO EARLY AUTOMATION.
- Testers don’t understand that automation can also ease their work with manual tests. The issue INADEQUATE COMMUNICATION will show you what patterns can help you in this case.
- Testers have been burned before with test automation and don’t want to repeat the experience. Look up issue UNMOTIVATED TEAM for help here.
- Testers do see the value of automation, but don’t want to have anything to do with it. Your issue is probably NON-TECHNICAL TESTERS.
- Supporting automation is not in the test plan and so testers won’t do it. Check the issue AD-HOC AUTOMATION for suggestions.
Should you think option 6 is the most suitable, simply click on NON-TECHNICAL TESTERS.
NON-TECHNICAL TESTERS (Process Issue)
Examples:
- Testers are interested in testing and not all testers want to learn the scripting languages of different automation tools. On the other hand, automators aren’t necessarily well acquainted with the application, so there are often communication problems.
- Testers can prepare test cases from the requirements and can therefore start even before the application has been developed. Automators must usually wait for at least a rudimentary GUI or API.
Resolving Patterns
Most recommended:
- DOMAIN-DRIVEN TESTING: Apply this pattern to get rid of this issue for sure. It helps you find the best architecture when the testers cannot also be automators.
- OBJECT MAP: This pattern is useful even if you don’t implement DOMAIN-DRIVEN TESTING because it forces the development of more readable scripts (see the sketch after this list).
Other useful patterns:
- KEYWORD-DRIVEN TESTING: This pattern is already widely used, so not only will it be easy for your testers to apply, but you will also find it easier to find automators able to implement it.
- SHARE INFORMATION: If you have issues like the first example above, this is the pattern for you!
- TEST AUTOMATION FRAMEWORK: If you plan to implement DOMAIN-DRIVEN TESTING you will need this pattern too. Even if you don’t, this pattern can make it easier for testers to use the automation and to help implement it.
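To make OBJECT MAP a little more concrete, here is a minimal sketch in Python; the logical names, locator strings and the commented tool calls are invented for illustration and are not taken from the wiki:

```python
# Minimal OBJECT MAP sketch: test scripts refer to GUI objects by logical
# names, while the physical locators live in one place and can change
# without touching the scripts.

OBJECT_MAP = {
    # logical name          invented, tool-style locator
    "customer.first_name": "id=txtFirstName",
    "customer.last_name":  "id=txtLastName",
    "customer.save":       "xpath=//button[@name='saveCustomer']",
}

def locator(logical_name: str) -> str:
    """Translate a logical object name into the tool-specific locator."""
    try:
        return OBJECT_MAP[logical_name]
    except KeyError:
        raise KeyError(f"No entry in the object map for '{logical_name}'")

# A script then reads in domain terms, e.g. (the tool calls are imaginary):
#   tool.type_into(locator("customer.first_name"), "Ada")
#   tool.click(locator("customer.save"))
```

When a locator changes, only the map needs updating, which is what keeps the scripts readable and maintainable.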
What is the difference between the most recommended and the other useful patterns?
You should always look first at the most recommended patterns, but if for some reason you cannot apply them, then you should at least apply one or more of the useful ones. Here the most recommended pattern is DOMAIN-DRIVEN TESTING.
DOMAIN-DRIVEN TESTING (Design Pattern)
Description – Testers develop a simple domain-specific language to write their automated test cases with. Practically this means that actions particular to the domain are described by appropriate commands, each with a number of required parameters. As an example, let’s imagine that we want to insert a new customer into our system. The domain-command will look something like this:
New_Customer (FirstName, LastName, HouseNo, Street, ZipCode, City, State)
Now testers only have to call New_Customer and provide the relevant data for a customer to be inserted. Once the language has been specified, testers can start writing test cases even before the SUT has actually been implemented.
Implementation – To implement a domain-specific language, scripts or libraries must be written for all the desired domain-commands. This is usually done with a TEST AUTOMATION FRAMEWORK that supports ABSTRACTION LEVELS.
There are both advantages and disadvantages to this solution. The greatest advantage is that testers who are not very adept with the tools can write and maintain automated test cases. The downside is that you need developers or test automation engineers to implement the commands, so testers are completely dependent on their “good will”. Another negative point is that the domain libraries may be implemented in the scripting language of the tool, so changing the tool may mean having to start again from scratch (TOOL DEPENDENCY). This can be mitigated to some extent using ABSTRACTION LEVELS.
KEYWORD-DRIVEN TESTING is a good choice for implementing a domain-specific language: Keyword = Domain-Command.
Potential problems – It does take time and effort to develop a good domain-driven automated testing infrastructure.
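As a rough illustration of the Keyword = Domain-Command idea, here is a small Python sketch; the command implementations, data values and print-based stand-ins for real SUT actions are all assumptions made for this example, not part of the pattern text:

```python
# Sketch of KEYWORD-DRIVEN TESTING used to implement a domain-specific
# language: each keyword (domain-command) is implemented once by the
# automation team; testers supply only keyword names and data.

def new_customer(first, last, house_no, street, zip_code, city, state):
    # In a real framework this would drive the SUT through lower-level
    # actions (GUI, API, ...); here we only record what would happen.
    print(f"Insert customer {first} {last}, {house_no} {street}, "
          f"{zip_code} {city}, {state}")

def check_customer_exists(first, last):
    print(f"Check that customer {first} {last} exists")

# Keyword -> implementation lookup table.
KEYWORDS = {
    "New_Customer": new_customer,
    "Check_Customer_Exists": check_customer_exists,
}

# A tester-written test case: nothing but keywords and (invented) data.
TEST_CASE = [
    ("New_Customer", "Ada", "Lovelace", "12", "High Street",
     "10115", "Berlin", "BE"),
    ("Check_Customer_Exists", "Ada", "Lovelace"),
]

def run(test_case):
    """Tiny interpreter: dispatch each row to its keyword implementation."""
    for keyword, *data in test_case:
        KEYWORDS[keyword](*data)

if __name__ == "__main__":
    run(TEST_CASE)
```

Even in this toy version you can see that someone has to build and maintain the command implementations and the interpreter.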
The above suggests that the pattern you need is a TEST AUTOMATION FRAMEWORK!
TEST AUTOMATION FRAMEWORK (Design Pattern)
Description – Using or building a test automation framework helps solve a number of technical problems in test automation. A framework is an implementation of at least part of a testware architecture.
Implementation – Test automation frameworks are included in many of the newer vendor tools. If your tools don’t provide a supporting framework, you may have to implement one yourself. In fact, it is often better to design your own TESTWARE ARCHITECTURE rather than adopt the tool’s way of organising things, which would tie you to that particular tool; you may one day want your automated tests to run using a different tool or on a different device or platform. If you design your own framework, you can keep the tool-specific things to a minimum, so when (not if) you need to change tools, or when the tool itself changes, you minimise the amount of work needed to get your tests up and running again.
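One way to picture keeping the tool-specific things to a minimum is a thin adapter layer between the testware and the tool; the interface, class and method names below are my own assumptions for the sketch, not something prescribed by the wiki:

```python
# Sketch: hide all tool-specific calls behind a small adapter interface so
# the rest of the testware never talks to the tool directly. Changing
# tools then means writing one new adapter, not rewriting the tests.

from abc import ABC, abstractmethod

class UiDriver(ABC):
    """Tool-neutral interface used by the rest of the testware."""

    @abstractmethod
    def click(self, logical_name: str) -> None: ...

    @abstractmethod
    def type_text(self, logical_name: str, text: str) -> None: ...

class ConsoleDriver(UiDriver):
    """Stand-in adapter; a real one would wrap a specific GUI tool."""

    def click(self, logical_name: str) -> None:
        print(f"click {logical_name}")

    def type_text(self, logical_name: str, text: str) -> None:
        print(f"type '{text}' into {logical_name}")

# Domain-commands and tests receive a UiDriver and never import the tool:
def new_customer(ui: UiDriver, first: str, last: str) -> None:
    ui.type_text("customer.first_name", first)
    ui.type_text("customer.last_name", last)
    ui.click("customer.save")

if __name__ == "__main__":
    new_customer(ConsoleDriver(), "Ada", "Lovelace")
```

The pay-off is that only the adapter has to change when the tool does.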
The whole team, developers, testers, and automators, should come up with the requirements for the test automation framework, and choose by consensus. If you are comparing two frameworks (or tools) use SIDE-BY-SIDE to find the best fit for your situation.
A test automation framework should offer at least some of the following features:
- Support ABSTRACTION LEVELS.
- Support use of DEFAULT DATA.
- Support writing tests.
- Compile usage information.
- Manage running the tests, including when tests don’t complete normally.
- Report test results.
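To make a few of these features tangible, here is a minimal Python sketch of a framework core that runs tests, copes with tests that do not complete normally, and reports results; all names and the example tests are invented for illustration:

```python
# Minimal sketch of a home-grown framework core: it runs tests, copes with
# tests that do not complete normally, and reports the results. The names,
# structure and example tests are illustrative only.

import traceback

def run_suite(tests):
    """Run each test function, catching failures so that one bad test
    cannot stop the rest of the suite."""
    results = []
    for test in tests:
        try:
            test()
            results.append((test.__name__, "PASS", ""))
        except AssertionError as failure:
            results.append((test.__name__, "FAIL", str(failure)))
        except Exception:
            # The test did not complete normally (crash, tool error, ...).
            results.append((test.__name__, "ERROR", traceback.format_exc()))
    return results

def report(results):
    """Very small results report; a real framework would log far more."""
    failed = 0
    for name, outcome, detail in results:
        print(f"{outcome:5}  {name}")
        if detail:
            print("       " + detail.strip().splitlines()[-1])
        if outcome != "PASS":
            failed += 1
    print(f"{len(results)} tests run, {failed} not passing")

# Two throwaway tests to show the reporting:
def test_that_passes():
    assert 1 + 1 == 2

def test_that_fails():
    assert "customer saved" in "", "expected a confirmation message"

if __name__ == "__main__":
    report(run_suite([test_that_passes, test_that_fails]))
```

A real framework would add logging, setup and tear-down, selection of which tests to run, and so on, but the shape is the same.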
You will need MANAGEMENT SUPPORT to get the resources required, especially developer time if you have to implement the framework in-house. This pattern tells you what a framework should do, but you may have plenty of ideas of your own. If you’d like suggestions about how to do it, you can delve into the detail of the other patterns, where you will find more advice, particularly in ABSTRACTION LEVELS and TESTWARE ARCHITECTURE.
However, just having a good technical framework isn’t always going to work, especially if the people seem to have no time for progressing the automation! Therefore this pattern also suggests getting MANAGEMENT SUPPORT.
MANAGEMENT SUPPORT (Management Pattern)
Description – Many issues can only be solved with good management support. When you are starting (or re-starting) test automation, you need to show managers that the investment in automation (not just in the tools) has a good potential to give real and lasting benefits to the organisation.
Implementation – Some suggestions when starting (or re-starting) test automation:
- Make sure to SET CLEAR GOALS. Either review existing goals for automation or meet with managers to ensure that their expectations are realistic and adequately resourced and funded.
- Build a convincing TEST AUTOMATION BUSINESS CASE. Test automation can be quite expensive and requires, especially at the beginning, a lot of effort.
- A good way to convince management is to DO A PILOT. In this way they can actually “touch” the advantages of test automation and it will be much easier to win them over.
- Another advantage is that it is much easier to SELL THE BENEFITS of a limited pilot than of a full test automation project. After your pilot has been successful, you will have a much better starting position to obtain support for what you actually intend to implement.
If you don’t know what management’s goals for automation are, you can try to find out, but challenging a customer’s automation goals probably isn’t the best way to get the help you need! Essentially, that is going in and telling them they’ve done it all wrong, which isn’t the best way to build new relationships. Therefore, building a business case isn’t relevant here, but DO A PILOT would be a good choice.
DO A PILOT (Management Pattern)
Context – This pattern is useful when you start an automation project from scratch, but it can also be very useful when trying to find the reasons your automation effort is not as successful as you expected.
Description – You start a pilot project to explore how best to automate tests on your application. The advantage of such a pilot is that it is time-boxed and limited in scope, so you can concentrate on finding out what the problems are and how to solve them. In a pilot project nobody expects you to automate a lot of tests; the aim is to find out which tools, design strategy and so on suit your application best.
You can also deal with problems that occur and will affect everyone doing automation, solving them in a standard way before rolling out automation practices more widely. You will gain confidence in your approach to automation. Alternatively, you may discover that something doesn’t work as well as you thought, so you find a better way – this is good to do as early as possible! Tom Gilb says: “If you are going to have a disaster, have it on a small scale”!
Implementation – Here are some suggestions and additional patterns to help:
- First of all, SET CLEAR GOALS: with the pilot project you should achieve one or more of the following goals:
- Prove that automation works on your application.
- Choose a test automation architecture.
- Select one or more tools.
- Define a set of standards.
- Show that test automation delivers a good return on investment.
- Show what test automation can deliver and what it cannot deliver.
- Get experience with the application and the tools.
- Try out different tools in order to select the RIGHT TOOLS that fit best for your SUT, but if possible PREFER FAMILIAR SOLUTIONS because you will be able to benefit from available know-how from the very beginning.
- Do not be afraid to MIX APPROACHES.
- AUTOMATION ROLES: see that you get the people with the necessary skills right from the beginning.
- TAKE SMALL STEPS, for instance start by automating a STEEL THREAD: in this way you can get a good feeling about what kind of problems you will be facing, for instance check if you have TESTABLE SOFTWARE.
- Take time for debriefing when you are through and don’t forget to LEARN FROM MISTAKES.
- In order to get fast feedback, adopt SHORT ITERATIONS.
What kind of areas are explored in a pilot? This is the ideal opportunity to try out different ways of doing things, to determine what works best for you. These three areas are very important:
- Building new automated tests. Try different ways to build tests, using different scripting techniques (DATA-DRIVEN TESTING or KEYWORD-DRIVEN TESTING). Experiment with different ways of organising the tests, i.e. different types of TESTWARE ARCHITECTURE. Find out how to most efficiently interface from your structure and architecture to the tool you are using. Take 10 or 20 stable tests and automate them in different ways, keeping track of the effort needed.
- Maintenance of automated tests. When the application changes, the automated tests will be affected. How easy will it be to cope with those changes? If your automation is not well structured, with a good TESTWARE ARCHITECTURE, then even minor changes in the application can result in a disproportionate amount of maintenance to the automated tests – this is what often “kills” an automation effort! It is important in the pilot to experiment with different ways to build the tests in order to minimise later maintenance. Putting into practice GOOD PROGRAMMING PRACTICES and a GOOD DEVELOPMENT PROCESS is key to success. In the pilot, use different versions of the application – build the tests for one version, then run them on a different version, and measure how much effort it takes to update the tests. Plan your automation to cope best with the application changes that are most likely to occur.
- Failure analysis. When tests fail, they need to be analysed, and this requires human effort. In the pilot, experiment with how the failure information will be made available for the people who need to figure out what happened. What you want to have are EASY TO DEBUG FAILURES. A very important area to address here is how the automation will cope with common problems that may affect many tests. This would be a good time to put in place standard error-handling that every test can call on.
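For that last point, standard error handling that every test can call on might be as simple as the following Python sketch; the hook name, the decorator and the recovery steps are placeholders I have invented for illustration:

```python
# Sketch of standard error handling that every automated test can call on:
# diagnostics are collected in one place and a common recovery step can be
# attempted, so a single failure does not derail the whole run. All names
# and the recovery steps are placeholders invented for this example.

import datetime
import functools
import traceback

def on_failure(test_name: str, error: Exception) -> None:
    """Central hook: log what went wrong and attempt standard recovery."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    print(f"[{stamp}] {test_name} failed: {error}")
    print(traceback.format_exc())
    # A real framework might also: save a screenshot, dump SUT logs,
    # restart the application, mark dependent tests as skipped, ...

def standard_error_handling(test_func):
    """Decorator: route any test failure through the shared on_failure hook."""
    @functools.wraps(test_func)
    def wrapper(*args, **kwargs):
        try:
            return test_func(*args, **kwargs)
        except Exception as error:
            on_failure(test_func.__name__, error)
            raise  # the test is still reported as failed
    return wrapper

@standard_error_handling
def test_new_customer_is_saved():
    raise RuntimeError("SUT did not show the confirmation dialog")

if __name__ == "__main__":
    try:
        test_new_customer_is_saved()
    except RuntimeError:
        pass  # the failure has already been logged by on_failure
```

Because every test fails through the same hook, failures are logged consistently, which is exactly what makes them easier to analyse.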
Potential problems –
- Trying to do too much: Don’t bite off more than you can chew; if you have too many goals you will have problems achieving them all.
- Worthless experiment: Do the pilot on something that is worth automating, but not on the critical path.
- Under-resourcing the pilot: Make sure that the people involved in the pilot are available when needed; managers need to understand that this is “real work”!
If you’d like to learn more about how to use the Test Automation Patterns Wiki, check it out directly or look up the book A Journey Through Test Automation Patterns by Seretta Gamba and Dot Graham on Amazon. The book follows the story of Liz, a test automator who thought she had just got the best job in the world. Through the different experiences of her team, they learn to solve (most of) their problems by using the test automation patterns. As you get to know the people on the team, you see how the patterns have helped to improve their automation, using very realistic examples.