
AutomationSTAR

Test Automation Conference Europe

EXPO

Oct 30 2023

The Importance of Code Quality: Production vs. Test

Code quality is a topic of prime importance in any software development cycle. It influences not only code functionality but also its maintainability, scalability, and long-term viability. Unfortunately, while we expect it of production code, it is often overlooked for test code. As a QA Manager, it’s crucial to understand and advocate for the necessity of high-quality code in both production and testing. Before discussing test code quality, let’s differentiate between test scripts and test code; it’s a crucial distinction.

Test Scripts vs. Test Code: The Distinction

We typically create test scripts in one of two ways: using record-and-playback tools and converting the results into scripts, or writing scripts line by line, action by action, as independent tests with limited or no reusability. While scripts seem simple to create and may be quick to deploy, they typically lack flexibility and are brittle, making them susceptible to breaking when things change. This can lead to regression gaps from abandoned scripts and other undesirable results. Test scripts are most suitable for simple, static workflows.

Conversely, test code involves writing code for test automation, ranging from unit tests to API tests to end-to-end tests. This approach often utilizes SDETs (Software Development Engineers in Test) or developers who know object-oriented principles, page object models, and data-driven testing principles. Typically needing fewer resources, it offers superior flexibility, robustness, and efficiency, making it ideal for complex projects with longer lifecycles or systems integral to an organization.
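
To make the contrast concrete, here is a minimal sketch of the page object pattern in Java with Selenium WebDriver (the class names and locators are invented for illustration): instead of repeating raw selectors in every script, each page exposes reusable actions, so a UI change is fixed in one place rather than in dozens of tests.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Page object: encapsulates the locators and actions of one page.
    class LoginPage {
        private final WebDriver driver;

        LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // One reusable action; if the login form changes,
        // only this class needs updating, not every test.
        HomePage loginAs(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login-button")).click();
            return new HomePage(driver);
        }
    }

    // Stub for the page that a successful login leads to.
    class HomePage {
        HomePage(WebDriver driver) {
        }
    }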

Why Does Code Quality Matter?

Code quality is the cornerstone of any reliable, robust, and efficient software system. Poor code quality can lead to bugs, security vulnerabilities, performance issues, and even system failures. High-quality code, on the other hand, addresses these concerns while being easier to read, maintain, and extend, reducing the time and resources needed for debugging and shortening release cycles.

While the importance of production code quality is evident, test code quality is often overlooked, if not wholly ignored. We know what can happen with poor production code. Similarly, poor test code can lead to reduced regression coverage, missed requirements, or false positives and negatives, which are notoriously difficult to track down. Good test code quality prevents these problems just as good production code quality prevents theirs.

Remember, testing is your primary defense against software bugs and defects.

What’s More Important, Production Code Quality Or Test Code Quality?

Let me preface this with my experience across many large enterprise organizations: code quality is challenging at the best of times, even in organizations with the best intentions. And as large projects start running into delays, code quality is often an early victim.

Back to the question: both production and test code quality are critical, but test code quality is more important than production code quality. Why, you ask?

Let’s answer that with another question. Which is better: a system with the best code ever written that does not work as expected, or a poorly written, unmaintainable system that does everything expected?

We all agree that a system that works as expected is preferred, regardless of the quality of its code. 

The nature of development means bugs; the issue is how quickly you identify and fix them. The better you test, the quicker you find bugs, and the quicker you find bugs, the faster you fix them. So good testing is critical, and the more maintainable and sustainable your tests are, the quicker they can adapt as your system changes, letting you identify bugs sooner.

Since testing ensures the quality of your production system, and production code quality does not guarantee functionality, test code quality is critical to the success of your test system and, ultimately, of your production system, which needs to work regardless of its code quality.

Summary 

This article underscores the importance of code quality for both software development and testing. It further differentiates between automated test scripts and test code, highlighting the latter’s advantages. We also see that test code quality can compensate for deficiencies in production code quality while protecting against bugs, which is why test code quality adds as much value as, if not more than, production code quality.

Are you ready to challenge your views and see test code quality in a new light?

Author 

Kim Filiatrault, Founder and President

Kim has been in the IT industry for over 35 years, most of it in highly technical areas including generative technologies and test automation. In 2005, Kim specialized in Insurtech and helped many enterprise customers implement large projects. More recently, his company released CenterTest, its new test automation technology, built on his decades of expertise in test automation and generative technologies, and has begun helping companies revolutionize the way they test.

When not working, Kim enjoys off-roading with his Jeep, going to the UT Longhorns Football games, serving his local communities, and traveling. 

Kimputing are an Exhibitor Sponsor at AutomationSTAR 20-21 Nov. 2023 in Berlin 


· Categorized: AutomationSTAR · Tagged: 2023, EXPO

Oct 26 2023

How To Turn Secure Planning into Secure Delivery?

It’s the year 2000. The millennium problem had just been conquered, and mobile phones were used only by big shots. I had just graduated and was working on some small local projects when the opportunity came along to join a major project for a large international company. While quickly building up my English vocabulary and pronunciation, I took my first steps in the world of SAP and a whole landscape of connected applications, and I started working with team members from different parts of the world. Since all interaction happened via e-mail and phone calls, it was an exciting prospect to meet people in real life when, after one year of working at a distance, a central on-site user acceptance test was planned.

The test activities were to be executed in Atlanta, Georgia. It was my first trip to the US, and in fact my first time flying at all. Since I wanted to be fit and well prepared on the Monday morning, I travelled two days early. So did other colleagues, and on Sunday I met some of the people I recognized from the voices on the many phone calls. By the evening there were already around 30 people from Asia, Europe, and the Americas, all flown in and prepared to start the acceptance testing.

That Monday morning the kick-off started punctually at 9am. Introductions, instructions, and test scripts were dealt with, and laptops and desktops were switched on. After the first coffee, a rumour spread that some people couldn’t connect to one of the main test systems. I tried it myself and, strangely enough, had the same problem. Shortly after 11am it became clear that nobody could connect, and the coffee corner grew very crowded. The test manager was making loud calls, holding hurried conversations, and looking rather stressed. Around noon it emerged that the main test system was down for planned maintenance and, according to the schedule, would only be back on Wednesday evening.

There we were: 30 people who had given up their weekend to travel from all over the world, staying in hotels, together for one week to work in a single room. Unfortunately, for the first three days of that week we couldn’t do anything because of a planning conflict with another team that nobody had been aware of. A big disappointment for the world travellers, and even worse for the project and the company.

The experience from the year 2000 always stayed in my mind, and now that I have work experience in many other companies, I can conclude that these kinds of issues keep popping up. Projects and teams have solid individual plans but are not fully aware of conflicts with other projects and activities. That lack of awareness often leads to unexpected system unavailability, which in turn leads to blown schedules and missed deadlines. From those experiences arose the idea to develop software that helps companies gain central insight into the availability of the systems in their landscape. The idea turned into an actual development project in 2018, when one of our customers went looking for such tooling in the market and couldn’t find anything. The first version of ERMplanner was born.

Today my company is working on version 2.7 of ERMplanner. To complete the circle, we recently got in contact with the company where it all started in the year 2000. Some of the people from the Atlanta test week are still around and, believe it or not, similar issues still occur today. The company is very interested in the tooling we have built over the years. Two weeks ago we held an implementation workshop, and in a few weeks a pilot will start in which they work with ERMplanner to get better insight into all the activities affecting their system availability. A successful implementation at this company would be the crowning glory of our work; I could even imagine retiring afterwards.

Author

Ronald Vreugdenhil, Founder of ERMplanner

Ronald Vreugdenhil studied Computer Science and worked as a consultant in the SAP logistics and workforce management areas. He has over 20 years of national and international project experience.

Since 2009 he has been co-owner of PeachGroup, helping organizations improve their service and maintenance processes. In 2017 he founded ERMplanner. ERMplanner is standard software that turns your release planning into reliable deliveries. It prevents conflicts between the individual schedules of release, change, project, and test managers so that all planned work can be carried out on schedule.

ERMplanner are a Gold Sponsor at AutomationSTAR 20-21 Nov. 2023 in Berlin

· Categorized: AutomationSTAR · Tagged: 2023, EXPO

Oct 24 2023

Allure Report Is More Than A Pretty Report

Behind the pretty HTML cover of Allure Report is the idea that quality should be the responsibility of the entire team, not just QA, which means that test results should be accessible and readable by people without the QA or dev skill set. Report allows you to move past the details that don’t help you, staying at your preferred level of abstraction; and yet, if you do need to drill into the code, it’s just a few mouse clicks away.

Report achieves this basic goal by being language-, framework-, and tool-agnostic. It can hide the peculiarities of your tech stack because it doesn’t depend on it. So how does one become agnostic? Not through magic: you have to write tons of integrations, literally hundreds of thousands of lines of code, to integrate with anything and everything. Allure Report is a hub of integrations, and its structure is designed specifically to make new integrations easier.

Let’s imagine that we’re writing a new integration for Report, and look at what resources we can leverage to make our job easier. We will be comparing how much effort we need to apply with Report and with other tools. We will start with the most straightforward advantages – the existing codebase; and then talk about more fundamental stuff like architecture and knowledge base.

Selenide native vs Selenide in Allure Report

To begin with, let us compare native reporting for Selenide with the way Selenide is integrated in Allure Report, and then see how difficult it was to write the integration for Report.

While creating simple reporting for Selenide is relatively easy, it’s a completely different story if you want to make quality test reports. In JUnit, there is only one extension point: the exception thrown on test failure. You can jam the meta-information for the report into that exception, but working with that information will be difficult.

By default, Selenide and most other tools take the easy road. When Selenide reports on a failed test, what you get is just the text of the exception, a screenshot, and the HTML of the page at the time of failure.

If you’re the only tester on the project and all the tests are fresh in your memory, this might be more than enough – which is what the developers of Selenide are telling us.

Now, let’s compare this to Allure Report. If you run Report on a Selenide test with nothing plugged in, you’ll get just the text of the exception, same as with Selenide’s report.

But, as I’ve said before, the power of Allure Report is in its integrations. Things change once we turn on allure-selenide and an integration for the framework we’re using (in this case, allure-junit). First (this is specific to the Selenide integration), we have to add the following line at the beginning of our test (or in a separate method with a @BeforeAll annotation):

SelenideLogger.addListener("AllureSelenide", new AllureSelenide());
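
For context, here is a fuller sketch of that setup: registering the listener in a @BeforeAll method of a JUnit 5 test class, assuming the allure-selenide and allure-junit5 dependencies are on the classpath (the screenshots and savePageSource toggles are optional extras).

    import com.codeborne.selenide.logevents.SelenideLogger;
    import io.qameta.allure.selenide.AllureSelenide;
    import org.junit.jupiter.api.BeforeAll;

    class CheckoutTest {

        @BeforeAll
        static void setUpAllureListener() {
            // Register the listener once; it records every Selenide
            // command as a step in the Allure report.
            SelenideLogger.addListener("AllureSelenide",
                    new AllureSelenide()
                            .screenshots(true)      // attach a screenshot on failure
                            .savePageSource(true)); // attach the page HTML on failure
        }
    }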

Now, our test results have steps in them, and you can see precisely where the test failed.

This can help you figure out why the test failed (whether the problem is in the test or in the code). You also get screenshots and the page source. Finally, with these integrations, you can wrap the function calls of your test inside the step() function or use the @Step annotation for functions you write yourself. This way, the steps displayed in test results will have custom names that you’ve written in human language, not lines of code. This makes the test results readable by people who don’t write Java (other testers, managers etc.). Adding all the steps might seem like a lot of extra work, but in the long run it actually saves time, because instead of answering a bunch of questions from other people in your company you can just direct them to test results written in plain English.
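To illustrate both styles, here is a hedged sketch (the method and step names are invented; Allure.step with a lambda comes from allure-java-commons, and @Step is the annotation mentioned above):

    import io.qameta.allure.Allure;
    import io.qameta.allure.Step;

    class CartTest {

        void buyOneItem() {
            // Inline style: the string becomes the step name in the report.
            Allure.step("Log in as a regular customer", () -> login("alice", "secret"));
            Allure.step("Add one item to the cart", this::addItemToCart);
        }

        // Annotation style: every call to this method appears as a named step.
        @Step("Log in as {user}")
        void login(String user, String password) {
            // ... Selenide actions ...
        }

        void addItemToCart() {
            // ... Selenide actions ...
        }
    }

Either way, the report shows “Log in as a regular customer” rather than the Java that implements it.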

This is powerful stuff compared to what Selenide (and most other tools) offer as default reports. So here’s the main question for this article: how much effort did it take to achieve this? The source code for the allure-selenide integration is about 250 lines long. Considering the functionality this provides, that’s almost nothing. Writing such an integration is probably about as easy as producing the bare-exception report we get from Selenide’s native reporting.

This is the main takeaway: a proper integration with Allure Report takes about as much effort as a quick and easy integration with other tools (provided we’re talking about a language where Report has an established code base, such as Java or Python). How is that possible?

Common Libraries

The 250 lines of code in allure-selenide leverage files with about 500 lines of code from the allure-model section of allure-java, and about 1300 lines from allure-java-commons. These common libraries have been created to ease the process of making new integrations – and there are more than a dozen for Java alone that utilize these common libraries.

Writing these libraries is not a straightforward task. There are problems of execution here that can be extremely difficult to solve. For instance, when writing the allure-go integration, Anton Sinyaev spent several months solving the issue of parallel test execution (an issue which was left unsolved for 8 years in testify, the framework from which allure-go was forked). Such problems can be unique for a particular framework, which makes writing common libraries difficult. Generally speaking, once the process has been smoothed out, writing an integration for a framework like JUnit might take a month of work; but if there are no common libraries present, you could be looking at 4 or 5 months.

The JSON with the results

Let’s go deeper. What if we’re writing an integration for an entirely new language? Since the language is different, none of the code can be reused. Here, the example with Go is particularly telling, since it is quite unlike Java or Python, both in basic things like lack of classes, and in the way it works with threads. Because of this, not only was it not possible to reuse the code, but even the general solutions couldn’t be translated from one language to another. Then what HAS been reused in that case?

Arguably the most important part of Allure Report is its data format, the JSON file which stores the results of test runs. This is the meeting point for all languages, the thing that makes Allure Report language-agnostic. Designing that format took about a year, and it has incorporated some major architectural decisions – which means if you’re writing a new integration, you no longer have to think about this stuff. Thanks to this, the first, raw version of allure-go was written over a weekend – although it took several months to solve problems of execution and work out the kinks.
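
For a sense of what that meeting point looks like, here is a trimmed, hypothetical sketch of one test’s result file (a *-result.json in the allure-results directory; all the values are invented for illustration):

    {
      "uuid": "f3b2c1d0-0000-0000-0000-000000000000",
      "name": "buyOneItem",
      "fullName": "com.example.CartTest.buyOneItem",
      "status": "failed",
      "statusDetails": { "message": "Element not found: #checkout-button" },
      "stage": "finished",
      "steps": [
        { "name": "Log in as a regular customer", "status": "passed", "start": 1698000000000, "stop": 1698000001200 },
        { "name": "Add one item to the cart", "status": "failed", "start": 1698000001200, "stop": 1698000002500 }
      ],
      "labels": [
        { "name": "framework", "value": "junit" },
        { "name": "language", "value": "java" }
      ],
      "start": 1698000000000,
      "stop": 1698000002500
    }

Because an integration in any language only has to emit files of this shape, the reporter itself never needs to know which framework produced them.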

Experience

Finally, there is the least tangible asset of all – experience. Writing integrations is a peculiar field of programming, and a person skilled in it will be much more productive than someone who is just talented and generally experienced. If one had to guess, it would probably take 10 people about 2–3 years to re-do the work that’s been done on Allure Report, with one developer for each of the major languages and its common libraries, 2 or 3 devs for the reporter itself, an architect, and someone to work with the community.

Community

Allure Report’s community is not an asset strictly speaking, but when creating a new integration, it plays an extremely important role in several ways.

  1. DEMAND. As we’ve already said, adding test reporting to a framework or a tool can take months of work if done properly. If you’re doing this purely for your own comfort, you’ll probably cut a lot of corners, do things quick and dirty. If, on the other hand, you’re working on something that is going to be used by millions of people, that’s motivation enough to sit around for an extra month or two and provide, say, proper parallel execution of tests.
  2. EXPERIENCED DEVELOPERS. Here, we’re kind of returning to the previous section: the open-source nature of the tool allowed Qameta to get in touch with plenty of developers experienced in writing integrations, and hire from that pool.
  3. THE INTEGRATIONS THEMSELVES. Allure Report didn’t start out as a tool designed to integrate with anything and everything: the first version was built just for JUnit 4 and Python. Pretty much everything outside allure-java and allure-python was initially developed outside Qameta, and then verified and internalized by the company.

All of this has been possible because there are many developers out there for whom Allure Report is a default tool – they are the bedrock of the community.

Conclusion

The structure of Allure Report didn’t appear all at once, like Athena did from the head of Zeus. It took many years of thinking, planning, and re-iterating on community feedback. What emerged as a result was a tool that was purpose-built to be extensible and to smooth out the creation of new integrations. Today, expanding upon this labor means leveraging the code, experience and architectural decisions that have been accumulated over the years.

If you’d like to learn more about Allure Report, we’ve recently created a dedicated site. Naturally, there’s documentation, as well as detailed info on all the integrations (under “Modules”). See if you can find your language and test framework there! And we’re planning to add much more stuff in the future, like guides, so don’t be a stranger and pay us a visit.

Author

Artem Eroshenko

CPO and Co-Founder of Qameta Software

Qameta Software are a Gold Sponsor at AutomationSTAR 20-21 Nov. 2023 in Berlin

· Categorized: AutomationSTAR · Tagged: 2023, EXPO, Gold Sponsor

Oct 16 2023

Prompt-Driven Test Automation

Bridging the Gap Between QA and Automation with AI

In the modern software development landscape, test automation is often a topic of intense debate. Some view it strictly as a segment of Quality Assurance, while others, like myself, believe it intersects both the realms of QA and programming. The Venn diagram I previously shared visualizes this overlap.

Historically, there’s a clear distinction between the competencies required for QA work and those needed for programming:

Skills Required for QA Work:

  • Critical Thinking: The ability to design effective test cases and identify intricate flaws in complex systems.
  • Attention to Detail: The ability to ensure that minor issues are caught before they escalate into major defects.
  • Domain Knowledge: A thorough understanding of technical requirements and business objectives to align QA work effectively.

Skills Required for Programming:

  • Logical Imagination: The capability to deconstruct complex test scenarios into segmented, methodical tasks ripe for efficient automation.
  • Coding: The proficiency to translate intuitive test steps into automated scripts that a machine can execute.
  • Debugging: The systematic approach to isolate issues in test scripts and rectify them to ensure the highest level of reliability.

We’re currently at an AI-driven crossroads, presenting two potential scenarios for the future of QA. One, where AI gradually assumes the roles traditionally filled by QA professionals, and another, where QAs harness the power of AI to elevate and redefine their positions.

This evolution not only concerns the realm of Quality Assurance but also hints at broader implications for the job market as a whole. Will AI technologies become the tools of a select few, centralizing the labor market? Or will they serve as instruments of empowerment, broadening the horizons of high-skill jobs by filling existing skill gaps?

I’m inclined toward the latter perspective. For QA teams to thrive in this evolving ecosystem, they must identify and utilize tools that bolster their strengths, especially in areas where developers have traditionally dominated.

So, what characterizes such a tool? At Loadmill, our exploration of this question has yielded some insights. To navigate this AI-augmented future, QAs require:

  • AI-Driven Test Creation: A mechanism that translates observed user scenarios into robust test cases.
  • AI-Assisted Test Maintenance: An automated system that continually refines tests, using AI to detect discrepancies and implement adjustments.
  • AI-Enabled Test Analysis: A process that deploys AI for sifting through vast amounts of test results, identifying patterns, and highlighting concerns.

When it comes to actualizing AI-driven test creation, there are two predominant methodologies. The code-centric method, exemplified by tools like GitHub Copilot, leans heavily on the existing codebase to derive tests. While this method excels at generating unit tests, its scope is inherently limited to the behavior dictated by the current code, making it somewhat narrow-sighted.

Loadmill, by contrast, champions the behavior-centric approach: an AI system that allows QA engineers to capture user interactions, or describe them in plain English, to create automated test scripts. The AI then converts this human-friendly narrative into corresponding test code. The integration of AI doesn’t stop there; it extends to test maintenance and result analysis, notably speeding up tasks that were historically time-intensive.

In sum, as the realms of QA and programming converge, opportunities for innovation and progress emerge. AI’s rapid advancements prompt crucial questions about the direction of QA and the broader job market. At Loadmill, we’re committed to ensuring that, in this changing landscape, QAs are not just participants but pioneers. I extend an invitation to all attendees of the upcoming conference: visit our booth in the expo hall. Let’s delve deeper into this conversation and explore how AI can be a game-changer for your QA processes.

For further insights and discussions, please engage with us at the Loadmill booth. 

Author

Ido Cohen, Co-founder and CEO of Loadmill

Ido Cohen is the Co-founder and CEO of Loadmill. With over a decade of experience as both a hands-on developer and manager, he’s dedicated to driving productivity and building effective automation tools. Guided by his past experience in coding, he continuously strives to create practical, user-centric solutions. In his free time, Ido enjoys chess, history, and vintage video games.

Loadmill are a Gold Sponsor at AutomationSTAR 20-21 Nov. 2023 in Berlin

· Categorized: AutomationSTAR, test automation · Tagged: 2023, EXPO, Gold Sponsor

