
AutomationSTAR

Test Automation Conference Europe


Aishling Warde

Sep 08 2025

Stop Tool Training, Start to Learn Test Automation!

Test automation is an integral part of modern software development, primarily focused on enhancing the efficiency and effectiveness of testing processes. Yet after decades of effort, success is still anything but guaranteed: anyone can cite numerous examples where the expected or hoped-for success failed to materialise.

You would expect that after all this time, we should be quite skilled at it. Yet, I believe there are a number of reasons that prevent us from fully harnessing the true power of test automation.

Firstly, there is a vast number of tools available, and the pace at which these tools develop and surpass each other is impressively high. Just keeping up with these changes could be a full-time job. But aside from that, we also need to ensure that we don’t end up using a new tool every few months, which we then have to learn, implement, and manage.

It’s not by chance that I’m focusing on the tools here. This is exactly one of the reasons our success is often jeopardised: an excessive focus on the tools! Test automation encompasses so much more than just having a tool to avoid manual testing, yet training programmes almost always focus heavily on tool training.

Let’s Look Beyond the Tools

I am convinced that the likelihood of success increases significantly when we pay more attention to truly understanding what test automation entails. Test automation is a broad and diverse landscape that extends beyond test definition and execution: when we look at the whole picture, test management, test data management, and results evaluation all play an essential role. However, we are often not aware of this.

When we look beyond the tools themselves, an entirely new world opens up: that of the relevant aspects impacting the implementation of test automation. Questions that then arise include:

  • Which techniques are we working with and need to be supported?
  • How do we optimally integrate the test (automation) process into existing processes?
  • What does the organisation itself demand and require from us?
  • How does the human factor influence the choices we make?

If we are aware of which questions to ask and how to integrate the answers into our architecture, we increase the likelihood of a successful test automation implementation. Now, the challenge remains… we won’t gain the necessary knowledge if we continue to focus on learning yet another tool.

Let’s Make a Change: Learn About Test Automation!

Fortunately, there are now training programmes available specifically aimed at teaching that knowledge. Certified Test Automation Professional (CTAP) is a good example of this, and I am very enthusiastic about it. The CTAP programme is designed to educate its participants on all critical aspects of test automation.

This training programme focuses on things like:

  • Different areas of application for test automation
  • Aspects that impact test automation
  • Test architecture and tool selection
  • The impact of AI on test automation
  • Principles and methodologies
  • And much more…

It balances important theoretical knowledge with useful practical skills. Armed with that expertise, you will undoubtedly be able to ask the right questions and uncover the necessary answers.

Dutch organisations like the Tax Office (Belastingdienst) and the Social Security Agency (UWV) are already embracing this training and the associated certification, and they are seeing many positive effects. They cite increased quality and a higher level of collaboration around test automation as major advantages. Additionally, it helps to have a common frame of reference and a clear understanding of what the world of test automation entails.

Ready to join the change and significantly increase the chances of success for test automation? Then dive further into the Certified Test Automation Professional programme or sign up for one of our training sessions! More information can be found here – Certified Test Automation Professional CTAP 2.0

Author

Frank van der Kuur

Frank embarked on his IT journey early in life, with a significant portion of his career devoted to the field of testing. Throughout this time, he has supported various organisations in improving the quality of their products. By bridging the gap between process and technology, he strives to enhance the efficiency of testing efforts.

Alongside his role as a practical consultant, Frank is also an enthusiastic trainer. He takes pleasure in helping his peers improve through training on testing tools, the testing process, or on-the-job coaching.

BQA are exhibitors in this year’s AutomationSTAR Conference EXPO. Join us in Amsterdam 10-11 November 2025.

· Categorized: AutomationSTAR · Tagged: 2025, EXPO

Sep 03 2025

Test Gap Analysis: Reveal Untested Changes

In long-lived software, the majority of errors originate in areas of the code that were recently modified. Even with systematic testing processes, a significant portion of such changes often gets shipped to production entirely untested, leading to a much higher probability of field errors. Test Gap Analysis (TGA) is designed to identify code changes that have not been tested, enabling you to systematically reduce this risk.

Scientific background

The efficacy of TGA is supported by scientific studies. One such study on a business information system comprising approximately 340,000 lines of C# code, conducted over 14 months, revealed that about half of all code changes went into production untested, even with a systematically planned and executed test process. Critically, the error probability in this changed, untested code was five times higher than in unchanged code (and also higher than in changed and tested code).

This research, along with comparable analyses conducted in various systems, programming languages, and companies, underscores the challenge of reliably testing changed code in large systems. The core problem is not a lack of tester discipline or effort, but rather the inherent difficulty in identifying all relevant changes without appropriate analytical tools.

How does TGA work?

TGA integrates both static and dynamic analyses to reveal untested code changes. The process involves several key steps:

  • Static Analysis: TGA begins by comparing the current source code of the system under test with the source code at a baseline, e.g., the last release. This identifies newly developed or modified code. The analysis filters out refactorings (e.g., changes in documentation, renaming of methods or code reorganization), which do not alter system behavior, thereby focusing attention on changes that could introduce errors.
  • Dynamic Analysis: Concurrently, TGA collects test coverage data from all executed tests, including both automated and manual test cases. This provides information on which code has been executed during testing.
  • Combination and Visualization: The results from both analyses are then combined to highlight the “Test Gaps” – those code areas that were changed or newly implemented but were not executed during testing. These results are typically visualized using treemaps, where rectangles represent methods or functions (sized proportionally to the amount of code inside them).

On these treemaps:

  • Gray represents methods that have not been changed since the last release.
  • Green indicates methods that were changed (or newly written) and were executed during testing.
  • Orange (and red) signifies methods that were changed (or newly written) but were not executed in any test, highlighting the critical “gaps” where most field errors are likely to occur.
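The combination step described above boils down to a simple set computation: changed methods minus covered methods. The following is a minimal sketch with hypothetical method names and data, not CQSE's actual implementation:

```python
# Minimal sketch of the Test Gap combination step: classify each method
# by whether it changed since the baseline and whether any test executed it.
# All method names and sets below are hypothetical illustrations.

def classify_methods(changed, covered, all_methods):
    """Map each method to its treemap color."""
    status = {}
    for method in all_methods:
        if method not in changed:
            status[method] = "gray"    # unchanged since the last release
        elif method in covered:
            status[method] = "green"   # changed and executed during testing
        else:
            status[method] = "orange"  # changed but untested: a test gap
    return status

all_methods = {"login", "checkout", "search", "export"}
changed = {"checkout", "export"}           # from static analysis (diff vs. baseline)
covered = {"login", "checkout", "search"}  # from dynamic analysis (coverage data)

status = classify_methods(changed, covered, all_methods)
gaps = [m for m, color in status.items() if color == "orange"]
print(gaps)  # the untested changes -> ['export']
```

In a real TGA setup the inputs come from the diff against the baseline (with refactorings filtered out) and from aggregated coverage of all manual and automated test runs.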

Impact

TGA is highly effective when applied continuously, e.g. to provide feedback to developers on pull/merge requests, to product owners on tickets, or to test managers on dedicated dashboards – enabling informed decisions about additional testing needs. It significantly reduces the amount of untested changes that get shipped, and has been shown to reduce field errors by 50%.

Learn more about TGA and other test optimizations in our free online deep dives!

Author

Dr. Sven Amann

Sven is a developer and software-quality consultant at CQSE GmbH. He studied computer science at TU Darmstadt and PUC de Rio de Janeiro and did his PhD in software engineering at TU Darmstadt. Sven is a sought-after (keynote) speaker on software quality topics at conferences, meetups, companies and universities around the world, drawing inspiration from his vast project experience working with CQSE’s many customers across all industries.

CQSE are exhibitors in this year’s AutomationSTAR Conference EXPO. Join us in Amsterdam 10-11 November 2025.

· Categorized: AutomationSTAR · Tagged: EXPO

Sep 01 2025

Leave Complexity Behind: No-Code Test Automation with Low-Code Integration via Robot Framework and TestBench

Fast releases demand efficient testing – but traditional test automation is often too complicated. TestBench and Robot Framework combine no-code test design with low-code implementation. The result: test automation that is clear, fast, and flexible – without technical barriers.

Challenges in Test Automation

Agile software development demands high quality and fast delivery cycles. Test automation is essential for this – but comes with two typical hurdles:

1. Separated know-how: QA testers know the requirements; technical automation engineers know how to implement them. Each side often lacks the other’s knowledge.

2. Delays due to disruptions in the process: changes to automated tests often require successive hand-offs between the QA specialist and the technical teams – which takes time.

Goal: Specification = Automation

The solution: An approach in which tests are automated directly during specification – without prior technical knowledge on the QA specialist side or detailed domain knowledge on the technical side.

This article shows how a combined no-code/low-code approach with TestBench and the Robot Framework closes precisely this gap.

Expertise and Technology – Thought of Separately, Strong Together

The idea: Professional test design and technical automation are created in specialised tools – TestBench for the specification, Robot Framework for the execution.

Both tools are based on the principle of Keyword Driven Testing and exchange test data and structures via defined interfaces. This enables a clean separation – and efficient interaction at the same time.

Keyword Driven Testing with TestBench

With the test design editor in TestBench, test cases can be put together using drag-and-drop from reusable steps, the so-called keywords – without any programming knowledge. Test data is defined as parameters or in data tables and clearly managed.

The advantages:

• Clarity: Each keyword stands for an action and generates its own test result.

• Reusability: Keywords can be used multiple times and maintained centrally.

• Efficiency: Changes only affect individual keywords – not the entire test case.

Example:

A vehicle consisting of a basic model, a special model and several accessories is to be configured. The required test sequence is mapped using three keywords in TestBench’s test design editor.

The parameter list of the test sequence forms its data interface. The values of the test sequence parameters are stored in data types and can be assigned in the associated data table.

Each line of this table represents one run of the test sequence with the values from that line, i.e. it represents a specific test case. These test case specifications, created in TestBench, serve as the template for implementing the test automation in Robot Framework. TestBench therefore represents the no-code part of the solution.
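The table-driven principle – one data row per test case, all rows running the same keyword sequence – can be sketched in plain Python. The function and values below are hypothetical stand-ins; in practice TestBench generates this structure for you:

```python
# Sketch of data-driven testing: each row of a data table is one run of
# the same test sequence. Function and data values are illustrative only.

def configure_vehicle(base_model, special_model, accessories):
    """Stand-in for the keyword sequence: returns the configured vehicle."""
    return {
        "base": base_model,
        "special": special_model,
        "accessories": list(accessories),
    }

# Data table: one row = one specific test case
rows = [
    ("Sedan", "Sport",   ["sunroof", "tow bar"]),
    ("SUV",   "Offroad", ["roof rack"]),
]

for base, special, accessories in rows:
    vehicle = configure_vehicle(base, special, accessories)
    assert vehicle["base"] == base
    assert vehicle["accessories"] == accessories
```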

Keyword Driven Testing with Robot Framework

The Robot Framework also relies on the Keyword Driven approach, in which test steps are described by reusable commands, the keywords. The advantage: The tests are structured in tabular form, are easy to read and can also be understood by non-technicians. However, basic programming knowledge is helpful for implementing technical keywords. Robot Framework comes with many standard libraries (e.g. BuiltIn, OperatingSystem, Collections) and can be extended by hundreds of specialised libraries – for example for web tests with Selenium or Browser Library, SAP with Robosapiens, databases, APIs and much more. Customised libraries only need to be developed for very specific requirements. The tests themselves are usually created in VS Code or IntelliJ IDEA – supplemented by plugins such as RobotCode, which enable syntax highlighting, code completion and test execution directly in the IDE.

Example: Vehicle configuration by keywords

A simple test case, e.g. for configuring a vehicle, describes each step of the test procedure using a single keyword – such as Select Base Model, Select Special Model or Select Accessory. Each keyword takes on a clearly defined task – and can be reused several times.

The technical realisation takes place on several levels: A keyword such as Select Base Model calls up other keywords internally – for example to find UI elements, wait for visibility and make a selection.
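This layering can be sketched in Python terms as high-level keywords composed from low-level ones. The real implementation would be Robot Framework keywords backed by a UI library; here the UI interaction is simulated, and all names are hypothetical:

```python
# Sketch of keyword layering: a business-level keyword delegates to
# technical helper keywords. UI calls are simulated via a log for clarity.

log = []

# --- low-level (technical) keywords ---
def find_element(locator):
    log.append(f"find {locator}")

def wait_until_visible(locator):
    log.append(f"wait {locator}")

def select_option(locator, value):
    log.append(f"select {value} in {locator}")

# --- high-level (business) keyword ---
def select_base_model(model):
    """Select Base Model: composed of three technical steps."""
    find_element("base-model-dropdown")
    wait_until_visible("base-model-dropdown")
    select_option("base-model-dropdown", model)

select_base_model("Sedan")
print(log)
```

Because each layer has a single responsibility, a UI change only touches the low-level keywords, while the business-level test cases stay stable.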

Transparency at every level: the Robot Framework protocol

A major advantage of Robot Framework is the detailed logging of each test step – including all keywords called, timings, and success or error messages.

For example, Robot Framework not only documents the execution of the keyword Select Base Model but also shows all internal steps – from the mouse click to the selection of the correct option. Errors can therefore be analysed down to the lowest level.

Technology that adapts – and integrates

Thanks to the large selection of libraries, almost all technologies can be tested with Robot Framework – web, mobile, desktop, databases, APIs, SAP, etc. Different libraries can even be combined within a test case, enabling flexible end-to-end scenarios: from web browser to SAP to mobile app – and back again.

In combination with TestBench’s no-code design, this results in a consistent, efficient automation approach: QA testers specify – technology implements. Fast, legible and robust.

Test Design Meets Automation – Seamlessly Synchronised

The test cases specified in TestBench are transferred directly to the Robot Framework – where they are implemented with technical keywords. Conversely, already realised keywords from Robot Framework can be transferred back into TestBench and used for new tests.

The special feature: top-down and bottom-up work happen simultaneously.

QA testers create scenarios while technical teams develop the appropriate keywords. Everything remains synchronised via a dedicated TestBench extension for Robot Framework – without media breaks or lost time.

TestBench acts as the leading system: it generates test cases, expects ready-made keywords – and controls the execution. The test data is also transferred directly, so no separate data logic is required on the technical side.

Result: Specification = Automation.

Advantages of the No-Code/Low-Code Combination

The integrated solution consisting of TestBench and Robot Framework offers numerous advantages:

Simple test design: Users with domain know-how create tests without detailed technical knowledge.

Centralised keyword management: Domain-specific and technical keywords are clearly separated – but centrally available.

Efficient collaboration: Clearly defined responsibilities, seamless exchange.

High maintainability: Keywords can be maintained with pinpoint accuracy, changes are implemented quickly.

Parallel working: Specification and implementation take place simultaneously – without dependencies.

Future-proof: The architecture remains flexible and expandable – especially in the rapidly developing technology sector.

Conclusion: Test automation that adapts – not the other way round

The combination of no-code test design with low-code automation elegantly solves the typical challenges in test automation:

• Domain specialists and technical teams work together efficiently.

• Automated tests are created at an early stage – and adapt with flexibility.

• The solution scales – from small scenarios to large test suites.

Compared to pure no-code solutions, the approach is significantly more flexible, sustainable and technically robust – making it a future-proof choice for professional test automation.

Would you like to find out more about this topic? Then download our whitepaper now!

Author

Dierk Engelhardt

Dierk Engelhardt is TestBench product manager at imbus AG. With his many years of experience, he advises and supports customers in the introduction and customized integration of tools into the software development process. He also advises on setting up agile teams and integrating testing into the agile context.

Imbus are exhibitors in this year’s AutomationSTAR Conference EXPO. Join us in Amsterdam 10-11 November 2025.

· Categorized: AutomationSTAR · Tagged: 2025, EXPO

Aug 27 2025

EU Regulations: The Example of the European Accessibility Act

Starting in mid-2025, the legal obligation to verify the accessibility of websites in the EU will extend to the private sector. This affects companies from various industries such as e-commerce, banking, and insurance, which must ensure their digital offerings are accessible. The public sector in Germany, for instance, was already required to test both internal and external websites for accessibility and ensure they comply with the standards of the Barrierefreie-Informationstechnik-Verordnung (BITV), based on the Web Content Accessibility Guidelines (WCAG). This is just one of many upcoming EU regulations, including product liability directives and the AI Act.

With this extension, it is expected that there will be waves of legal warnings and fines similar to those after the introduction of GDPR. This increases the pressure on service providers and operators of digital offerings to review and adapt their websites according to legal requirements.

However, current technology only allows a small portion of WCAG requirements to be tested automatically. Many aspects of accessibility still need to be tested manually, and results from automations have to be reviewed by a human tester, making the process more time-consuming and resource-intensive. The Accessibility Conformance Testing (ACT) working group of the W3C is developing conformity rules that include both manual and partially automated testing methods to enable a more comprehensive assessment of WCAG compliance. About 30% of the success criteria of WCAG 2.2 are considered automatable in a classical way. With the use of AI, better coverage is possible here. webmate can provide automation for up to 60% of the success criteria.
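To illustrate what "classical" automation can and cannot do: a script can mechanically check the automatable part of WCAG success criterion 1.1.1 (every image needs a text alternative), while judging whether existing alt text is *meaningful* still requires a human reviewer. A minimal sketch using only the Python standard library – webmate's actual checks are far more extensive:

```python
# Sketch of an automatable accessibility check: flag <img> tags that lack
# an alt attribute (the machine-checkable part of WCAG 1.1.1).
# Whether present alt text is meaningful still needs human review.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []  # src values of images without alt

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.violations.append(attributes.get("src", "<unknown>"))

html = """
<img src="logo.png" alt="Company logo">
<img src="chart.png">
"""
checker = MissingAltChecker()
checker.feed(html)
print(checker.violations)  # -> ['chart.png']
```

Note that an empty `alt=""` (a decorative image) passes this check – correctly so per WCAG – which is exactly the kind of nuance that makes full automation hard.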

Shift Left for Accessibility

Accessibility is a crucial aspect of software development that has often been neglected. To effectively address and prevent accessibility issues, they should be integrated early into the testing routine. One way to do this is by using tools like webmate, which can already provide valuable insights during the development phase and shine in quality assurance and documentation processes.

The upcoming legal changes and technological developments make it clear that companies must take proactive measures to meet accessibility requirements and avoid legal risks. By automating tests in pipelines and CI/CD processes, continuous accessibility testing can be ensured without the need for additional manual testing efforts. The “Shift Left” approach in testing means that developers should consider accessibility from the initial implementation of a feature, rather than waiting until the end of the development process.

Manual testing can also capture aspects of accessibility that automated tools might overlook. However, manual testing alone is not sufficient to guarantee comprehensive accessibility. A combination of automated tests and manual feedback is necessary to obtain a complete picture of accessibility issues and ensure that all user groups can equally benefit from the software.

The Need for Auditing

Regular audits ensure the traceability and documentation of accessibility measures. This is particularly important for compliance with national and international standards. Audits enable systematic reviews of existing processes and ensure that all aspects of accessibility are continuously maintained and improved. This way, the Accessibility Enhancement Act can be easily fulfilled.

Audits should be considered a complementary part of regular testing processes. While tests help identify problems early, audits provide a comprehensive assessment and documentation of the overall accessibility situation. The audit results also serve as excellent proof. This can create a well-rounded and effective accessibility management system that meets both legal requirements and the needs of all users.

Because the process is lengthy and time-consuming, it is often difficult to satisfy every component and regulation. With the help of specialized tools, audits can be conducted repeatedly and compliance maintained. In my opinion, only those who embrace the EU regulations can act in a future-proof manner.

Testfabrik: Driving Quality, Reducing Risk, Accelerating Delivery.

In today’s high-stakes digital landscape, leading enterprises trust webmate to deliver end-to-end testing solutions that align with business goals. From compliance to performance, we help de-risk deployments, streamline operations, and future-proof for upcoming requirements – without slowing down innovation.

Author

Tim Böhm

Tim Böhm is a tech enthusiast and passionate teacher. During and after his computer science studies, he worked at Testfabrik Consulting + Solutions AG in both product development and marketing. Since 2024, he has been leading the sales and marketing department for webmate and regularly appears as an instructor in training sessions and as a speaker at webinars and events.

Testfabrik are exhibitors in this year’s AutomationSTAR Conference EXPO. Join us in Amsterdam 10-11 November 2025.

· Categorized: AutomationSTAR · Tagged: 2025, exhibitor

Aug 25 2025

Allure Start: tech stack as a service

The diversity of modern testing and development tools forces us to experiment a lot with the tech stack.

Suppose our team created a JavaScript project with tests, and then developers decided to switch to Python. Now, testers are presented with a choice: continue writing E2E tests in Playwright-JS, or switch to Pytest + Playwright.

Taking the time to properly analyze a decision is never easy, because it means you have to stop working. The more tests you have, the more difficult and time-consuming the analysis becomes.

What is the comparative performance of your test suite with different frameworks? How difficult will it be to implement screenshot highlighting with our new stack? Each comparison means you need to spend at least several hours on environment setup. At best, you lose a lot of time, at worst, you delay decisions and accrue technical debt.

This is an issue that the Allure team has been facing a lot, since creating a language-agnostic reporter meant working with lots of different tech stacks. To simplify environment setup, the team wrote Allure Start, an open-source tool that allows you to customize a tech stack with a few mouse clicks and then download a fully configured project environment.

To get a feel for how much time this saves, let’s go through the steps of setting up an environment for a project with Pytest and Playwright (say, for experimenting with screenshots for automated tests). First, we’ll take the usual route, and then see how Allure Start changes the process.

The usual route

We assume that Python is already installed on the system, Allure Report is available, and an IDE is configured (let’s say, Cursor). It’s probably a good idea to make sure everything is working (run `python --version`, `pip --version`, and `allure --version` in the terminal).

Next, we follow these steps:

  • We create a new project in Cursor, add a test folder for tests, and a pyproject.toml file for metadata.
  • Create a local Python virtual environment, so that we don’t pollute the system Python with our project’s dependencies: `python -m venv .venv`.
  • Activate the environment: `. ./.venv/bin/activate` (Linux/macOS) or `./.venv/Scripts/Activate.ps1` (Windows PowerShell).
  • Update the packaging tools: `python -m pip install --upgrade pip setuptools wheel`.
  • Use pip to install the dependencies: `pip install pytest pytest-playwright allure-pytest allure-python-commons`.
  • Playwright is going to need browsers to work: `playwright install`.
  • It’s a good idea to freeze the pip dependencies, so that future updates of Pytest or Playwright don’t break our code: `pip freeze > requirements.txt`. Once this file exists, the dependencies can be reinstalled anywhere via `pip install -r requirements.txt`.

If everything has been hooked up properly, we can finally start writing our tests and run them with the `pytest` command.
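A minimal first test file to verify the setup might look like the sketch below. The file name and function under test are hypothetical stand-ins, not part of any tutorial; the Playwright and Allure pieces would be layered on once the basics run:

```python
# test_smoke.py — a minimal Pytest smoke test to check the environment.
# The function under test is an illustrative stand-in.

def normalize_title(title: str) -> str:
    """Toy system under test: collapse whitespace and title-case a string."""
    return " ".join(title.split()).title()

def test_normalize_title():
    assert normalize_title("  allure   report ") == "Allure Report"

def test_empty_title():
    assert normalize_title("") == ""
```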

The usual route: switching package managers

As our experiment evolves, we might decide to switch to a different package manager — in more serious projects, poetry is usually used instead of pip. Installing and configuring poetry can take a while as well. For instance, do you remember off-hand how to configure it to use an in-project virtual environment? ChatGPT (or Cursor) does speed up this process, but it still takes a while. Keep in mind that what we’ve just described is the best case scenario, with no time spent on debugging — which is a very optimistic assumption.
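For the record, the in-project virtual environment mentioned above is a one-line Poetry setting (valid for current Poetry versions; check `poetry config --list` on your install):

```shell
# Tell Poetry to create the .venv directory inside the project folder
poetry config virtualenvs.in-project true
```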

The same with Allure Start

The entire process from the previous section is replaced by literally a few mouse clicks. We go to https://allurereport.org/start/, and select Python -> Pytest.

Next, we specify project metadata and download an archive that can be immediately opened in your favorite IDE. Playwright will need to be added separately, but everything else is ready. Importantly, the environment is stable and debugged.

Note that the project you’ve just downloaded contains some elements that are absent in the manual project we’ve set up above:

  1. An src folder with a placeholder for our system under test.
  2. Two scripts: run.bat for Windows and run.sh for Linux/Mac. These pretty much repeat all the steps we’ve done manually (creating a virtual environment, updating the package manager, etc.)

Now, for the second case we’ve discussed above: switching from pip to poetry. Allure Start allows us to change the package manager with literally one click.

Now download a new project, and if the old project already has some code, copy it. That’s it.

Use cases

Allure Start saves time on virtually every project, but some use cases stand out in particular.

1. Experimentation and decision-making.

That’s an obvious one, we’ve discussed it at the start. If you want to compare different frameworks, each comparison means an extra few hours setting up the environment; Allure Start cuts that time down drastically.

2. Tech Support

When someone comes to you complaining that in their setup, Jest version something.something glitches out, you need to replicate their stack if you want to reproduce their problem. Even if you’ve already worked under that exact setup, it will take you a few hours to refresh all the dependencies and ensure everything works properly.

3. Onboarding and education.

Switching to a new tech stack becomes a much easier affair if the environment is stable and debugged. This matters for new hires, for teams that want to switch tools, and for people new to the industry. Having just learned how to write selectors, you might be eager to get some testing done. But then it turns out you need to spend several frustrating days setting up the IDE and all the dependencies. Not fun.

Conclusion

The modern testing tech stack is incredibly variable, and you can find all kinds of permutations out there. That limits the number of people in the community who have worked on a particular permutation, makes problem-solving more difficult, and complicates experimenting with new tools.

Allure Start helps you try out different combinations of tools error-free and quickly, with just a few clicks. That means:

  • Less time spent on decisions to adopt or upgrade technologies
  • Much quicker diagnosis of technical problems on uncommon tech stacks
  • Lower barrier of entry for people who are just beginning to learn test automation

Try it and tell us what you think!

Author

Dmitry Baev

Author of Allure Framework & CTO of Qameta Software

Allure Report are event sponsors in this year’s AutomationSTAR Conference EXPO. Join us in Amsterdam 10-11 November 2025.

· Categorized: AutomationSTAR · Tagged: 2025, event sponsors

