
AutomationSTAR

Test Automation Conference Europe

2023

Nov 06 2023

Efficient Software Testing in 2023: Trends, AI Collaboration and Tools

In the rapidly evolving field of software development, efficient software testing has emerged as a critical component in the quality assurance process. As we navigate through 2023, several prominent trends are shaping the landscape of software testing, with artificial intelligence (AI) taking center stage. We’ll delve into the current state of software testing, focusing on the latest trends, the increasing collaboration with AI, and the most innovative tools.

Test Automation Trends

Being aware of QA trends is critical. By staying up to date on the latest developments and practices in quality assurance, professionals can adapt their approaches to meet evolving industry standards. Based on the World Quality Report by Capgemini & Sogeti, and The State of Testing by PractiTest, popular QA trends currently include:

  • Test Automation: Increasing adoption for efficient and comprehensive testing.
  • Shift-Left and Shift-Right Testing: Early testing and testing in production environments for improved quality.
  • Agile and DevOps Practices: Integrating testing in Agile workflows and embracing DevOps principles.
  • AI and Machine Learning: Utilizing AI/ML for intelligent test automation and predictive analytics.
  • Continuous Testing: Seamless and comprehensive testing throughout the software delivery process.
  • Cloud-Based Testing: Leveraging cloud computing for scalable and cost-effective testing environments.
  • Robotic Process Automation (RPA): Automating repetitive testing tasks and processes to enhance efficiency and accuracy.

QA and AI Collaboration

It’s no secret that AI is transforming our lives, and collaborating with tools like ChatGPT can automate a substantial portion of QA routine. We’ve compiled a list of helpful prompts to streamline your testing process and save time.

Test Case Generation

Here are some prompts to assist in generating test cases using AI:

“Generate test cases for {function_name} considering all possible input scenarios.”
“Create a set of boundary test cases for {module_name} to validate edge cases.”
“Design test cases to verify the integration of {component_A} and {component_B}.”
“Construct test cases for {feature_name} to validate its response under different conditions.”
“Produce test cases to assess the performance of {API_name} with varying loads.”
“Develop test cases to check the error handling and exceptions in {class_name}.”

Feel free to modify these prompts to better suit your specific testing requirements.
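
As a rough illustration, placeholder-style prompts like the ones above can be filled in programmatically before being sent to an AI assistant. The template keys and helper function below are our own invention for this sketch, not part of any tool mentioned in this article:

```python
# Hypothetical helper that fills the {placeholder} prompt templates above
# with concrete names before they are sent to an AI assistant.
TEMPLATES = {
    "boundary": "Create a set of boundary test cases for {module_name} to validate edge cases.",
    "errors": "Develop test cases to check the error handling and exceptions in {class_name}.",
}

def build_prompt(kind: str, **params: str) -> str:
    """Return a ready-to-send prompt for the given template kind."""
    return TEMPLATES[kind].format(**params)

print(build_prompt("boundary", module_name="checkout"))
# Create a set of boundary test cases for checkout to validate edge cases.
```

Keeping the templates in one place makes it easy to review and refine your prompt wording as you learn what produces the best test cases.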

Example

We asked for a test case to be generated for a registration process with specific fields: First Name, Last Name, Address, and City.

AI provided a test case named “User Registration” for the scenario where a user attempts to register with valid inputs for the required fields. The test case includes preconditions, test steps, test data, and the expected result.
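
A structured test case like that can also be captured as plain data, which makes it easy to store, diff, and later attach to automation. The field names and sample values in this sketch are our own illustration, not a specific tool's format:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Minimal, tool-agnostic shape for a generated test case.
    name: str
    preconditions: list
    steps: list
    test_data: dict
    expected_result: str

# The "User Registration" case from the example above, with placeholder data.
registration = TestCase(
    name="User Registration",
    preconditions=["Registration page is open"],
    steps=["Fill in the required fields", "Submit the form"],
    test_data={"First Name": "Ada", "Last Name": "Lovelace",
               "Address": "12 Example St", "City": "Berlin"},
    expected_result="The user account is created successfully",
)
print(registration.name)
```
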

Test Code Generation

In the same way, you can create automated tests for web pages and their test scenarios.

To enhance the relevance of the generated code, it is important to apply your own test automation expertise. We recommend studying the tutorial and writing your tests with appropriate tools, such as JetBrains Aqua, which provide tangible examples of automatically generated UI tests for web pages.

Progressive Tools

Using advanced tools for test automation is essential because they enhance efficiency by streamlining the testing process and providing features like test code generation and code insights. These tools also promote scalability, allowing for the management and execution of many tests as complex software systems grow.

UI Test Automation

To efficiently explore a web page and identify available locators:

  • Open the desired page.
  • Interact with the web elements by clicking on them.
  • Add the generated code to your Page Object.

This approach allows for a systematic and effective way of discovering and incorporating locators into your test automation framework.
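
In code, the outcome of that workflow is typically a Page Object that gathers the discovered locators in one place, so tests reference symbolic names rather than raw selectors. A minimal, framework-free sketch (the class and locator names are invented for illustration):

```python
class RegistrationPage:
    # Locators discovered by clicking through the page, collected in one
    # place so tests use readable names instead of raw selectors.
    FIRST_NAME = ("css selector", "#first-name")
    LAST_NAME = ("css selector", "#last-name")
    SUBMIT = ("xpath", "//button[@type='submit']")

    def locator(self, name: str) -> tuple:
        """Look up a locator by its symbolic name."""
        return getattr(self, name.upper())

page = RegistrationPage()
print(page.locator("submit"))
# ('xpath', "//button[@type='submit']")
```

When the page changes, only the Page Object needs updating; the tests that use it stay untouched.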

Code Insights

To efficiently search for available locators based on substrings or attributes, you can leverage autocompletion functionality provided by the JetBrains Aqua IDE or plugin.

In cases where you don’t remember the location to which a locator leads, you can navigate seamlessly between the web element and the corresponding source code. This allows you to quickly locate and understand the context of the locator, making it easier to maintain and modify your test automation scripts. This flexibility facilitates efficient troubleshooting and enhances the overall development experience.

Test Case As A Code

The Test Case As A Code approach is valuable for integrating manual testing and test automation. Creating test cases alongside the code enables close collaboration between manual testers and automation engineers. New test cases can be easily attached to their corresponding automation tests and removed once automated. Synchronizing manual and automated tests to keep them consistent and accurate, normally a real challenge, no longer needs to be addressed separately. Additionally, leveraging version control systems (VCS) offers benefits such as versioning, collaboration, and traceability, enhancing the overall test development process.
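
One lightweight way to realize this linkage is to tag each automated test with the ID of the manual case it covers; any manual case left untagged is still awaiting automation. The registry and decorator below are our own sketch of the idea, not a specific tool's API:

```python
# Manual test cases, keyed by a stable ID (hypothetical example data).
MANUAL_CASES = {"TC-101": "User registers with valid data",
                "TC-102": "User registers with a missing city"}
AUTOMATED = {}

def automates(case_id: str):
    """Mark a test function as the automation of a manual test case."""
    def wrap(fn):
        AUTOMATED[case_id] = fn.__name__
        return fn
    return wrap

@automates("TC-101")
def test_valid_registration():
    pass  # real test logic would go here

# Manual cases not yet attached to an automated test:
backlog = sorted(set(MANUAL_CASES) - set(AUTOMATED))
print(backlog)  # ['TC-102']
```

Because both the cases and the tags live in version control, the mapping evolves in the same commits as the tests themselves.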

Stay Tuned

The industry’s rapid development is exciting, and we are proud to be a part of this growth. We have created JetBrains Aqua, an IDE specifically designed for test automation. With Aqua, we aim to provide a cutting-edge solution that empowers testers and QA professionals. Stay tuned for more updates as we continue to innovate and contribute to the dynamic test automation field!

Author

Alexandra Psheborovskaya (Alex Pshe)

Alexandra works as a SDET and a Product Manager on the Aqua team at JetBrains. She shares her knowledge with others by mentoring QA colleagues, such as in Women In Tech programs, supporting women in testing as a Women Techmakers Ambassador, hosting a quality podcast, and speaking at professional conferences.

JetBrains is an EXPO Gold partner at AutomationSTAR 2023 – join us in Berlin!

· Categorized: AutomationSTAR, test automation · Tagged: 2023, Gold Sponsor

Oct 30 2023

The Importance of Code Quality: Production vs. Test

Code quality is a topic of prime importance in any software development cycle. It influences not only code functionality but also its maintainability, scalability, and long-term viability. Unfortunately, while we hope it is taken seriously for production code, it is often overlooked for test code. As a QA Manager, it’s crucial to understand and advocate for the necessity of high-quality code in both production and testing. Before discussing test code quality, let’s differentiate between test scripts and test code; it’s a crucial distinction.

Test Scripts vs. Test Code: The Distinction

We typically create test scripts in one of two ways: using record-and-playback tools, then converting the results into scripts; or writing scripts line by line, action by action, as independent tests with limited or no reusability. While scripts seem simple to create and may be quick to deploy, they typically lack flexibility and are brittle, making them susceptible to breaking when things change. This can lead to regression gaps with abandoned scripts and other undesirable results. Test scripts are most suitable for simple, static workflows.

Conversely, test code involves writing code for test automation, ranging from unit tests to API tests to end-to-end tests. This approach often utilizes SDETs (Software Development Engineers in Test) or developers who know object-oriented principles, page object models, and data-driven testing principles. Typically needing fewer resources, it offers superior flexibility, robustness, and efficiency, making it ideal for complex projects with longer lifecycles or systems integral to an organization.

Why Does Code Quality Matter?

Code quality is the cornerstone of any reliable, robust, and efficient software system. Poor code quality can lead to bugs, security vulnerabilities, performance issues, and even system failures. On the other hand, high-quality code addresses these concerns while being easier to read, maintain, and extend, reducing the time and resources needed for debugging and shortening release cycles.

While the importance of production code quality is evident, test code quality is often overlooked, if not wholly ignored. We know what can happen with poor production code. Similarly, poor test code can lead to reduced regression testing, missed requirements, or even false positives or negatives, which are notoriously difficult to find. Good test code quality helps resolve these issues in the same way good production code quality helps resolve problems.

Remember, testing is your primary defense against software bugs and defects.

What’s More Important, Production Code Quality Or Test Code Quality?

Let me preface this with my experience across many large enterprise organizations: Code quality is challenging at the best of times, even with organizations having the best intent. And as large projects start running into delays, code quality is often an early victim.

Back to the question: both production and test code quality are critical, but test code quality is more important than production code quality. Why, you ask?

Let’s answer that with another question. Which is better: a system with the best code ever written that does not work as expected, or a poorly written, unmaintainable system that does everything expected?

We all agree that a system that works as expected is preferred, regardless of the quality of its code. 

The nature of development means bugs; the issue is how quickly you identify and fix them. The better you test, the quicker you find bugs. And the quicker you find bugs, the faster you fix them. So good testing is critical, and the more maintainable and sustainable your tests are, the quicker they can adapt as your system changes, letting you identify bugs sooner.  

Since testing ensures the quality of your production system, and production code quality does not guarantee functionality, test code quality is critical to the success of your test system and, ultimately, your production system, which needs to work, regardless of its code quality. 

Summary 

This article underscores the importance of code quality for both software development and testing. It further differentiates between automated test scripts and test code, accentuating the latter’s superiority. We also see that test code quality can compensate for deficiencies in production code quality while protecting against bugs, explaining how test code quality adds as much, if not more, value than production code quality.

Are you ready to challenge your views and see test code quality in a new light?

Author 

Kim Filiatrault, Founder and President

Kim has been in the IT industry for over 35 years, most of it in highly technical areas including generative technologies and test automation. In 2005, Kim specialized in Insurtech and helped many enterprise customers implement large projects. More recently, his company released CenterTest, its new test automation technology, based on his decades of expertise in test automation and generative technologies, and they have started helping companies revolutionize the way they test.

When not working, Kim enjoys off-roading with his Jeep, going to the UT Longhorns Football games, serving his local communities, and traveling. 

Kimputing are an Exhibitor Sponsor at AutomationSTAR 20-21 Nov. 2023 in Berlin 


· Categorized: AutomationSTAR · Tagged: 2023, EXPO

Oct 26 2023

Speaker Interview: Geosley Andrades on his Eureka Moment, Advice for Testers & More

AutomationSTAR 2023 speaker Geosley Andrades shares his thoughts on test automation practices, and gives an insight into what you can expect at his talk, ‘Unlock New Possibilities in Test Automation with ChatGPT’.

1. How did you get into testing?

I embarked on my journey in the field of testing under somewhat serendipitous circumstances. Initially, I entered the realm of IT as a .NET developer. However, the year 2008 brought with it a challenging wave of economic recession, which prompted me to make a strategic career choice. It was during this pivotal moment that I decided to delve into the world of testing. Over the years, I have not only embraced this profession wholeheartedly but have also found immense passion and fulfillment in the journey it has offered me.

2. What was your Test Automation Eureka moment?

As a relative newcomer to the field with just one year of experience, I was entrusted with the task of building my first test automation framework from scratch. To my astonishment, this framework not only benefited my team but was also adopted as the preferred choice across other teams within my organization. This transformative experience ignited my deep passion for test automation and set me on a course of continuous growth and innovation in the field.

3. If you had the power to change one widely accepted practice in testing, what would it be, and why do you think it needs to change?

I would advocate for a shift from primarily UI testing towards a more comprehensive approach, encompassing deeper layers like APIs, databases, and integrated systems. This change is necessary because, in many organizations, testing often remains superficial, focusing only on the UI, leaving underlying issues undiscovered until later stages.

4. Some argue that traditional testing methods are obsolete in the age of Automation. Do you believe there’s still value in manual testing, or is it a dying practice?

There is undoubtedly enduring value in human-driven exploratory testing, and I firmly believe that it is far from a dying practice. While automation tools have advanced significantly and streamlined many testing processes, they lack the critical element of human judgment, intuition, and creativity. Unlike tools, humans possess the cognitive capacity to evaluate, recognize patterns, make nuanced decisions, think critically, interpret results, observe subtle issues, and adapt to evolving scenarios. As a result, human-driven exploratory testing remains a vital component of the testing landscape, complementing Automation and AI-driven techniques. It ensures the discovery of complex and context-specific defects that can be challenging for tools to identify.

5. What can attendees expect to gain from your presentation or workshop at the AutomationSTAR conference?

As AI’s prominence continues to grow, it’s a consensus that AI won’t replace testers, but testers who harness AI’s power definitely will. In my presentation, we will embark on a journey to uncover innovative approaches through which ChatGPT (Generative AI) can empower testers and augment their skill set. Furthermore, I will delve into the significance of educating ourselves about this Generative AI technology and adopting it as an invaluable tool, reframing it from a perceived threat. The moment has arrived for us to embrace AI and redirect our focus toward the facets of our roles that deliver the most value.

6. What’s the biggest message you have for AutomationSTAR Attendees?

With its dedicated focus on Automation, AutomationSTAR offers a unique opportunity for everyone in the testing community. Featuring an array of automation-related topics presented by world-class speakers, the conference promises a wealth of knowledge and networking opportunities. As a speaker myself, I’m eagerly anticipating not only sharing insights but also learning and connecting with the automation community. I encourage all of you to participate in this celebration of our craft and leverage this event to propel your test automation careers to new heights. See you there!

It’s your last chance to get tickets to the AutomationSTAR Conference in Berlin, 20-21 November. The enthusiasm from the community is incredible – tutorials are sold out, and our new Conference Only tickets are being snapped up! Get your tickets now.

· Categorized: AutomationSTAR · Tagged: 2023

Oct 26 2023

How To Turn Secure Planning into Secure Delivery?

It’s the year 2000. The millennium problem had just been conquered, and mobile phones were only used by big shots. I had just graduated and worked on some small local projects when the opportunity came along to join a major project for a large international company. While I was quickly working on my English vocabulary and pronunciation, I took my first steps in the world of SAP and a whole landscape of connected applications. I started working together with team members from different parts of the world. Since interaction was only via e-mail and phone calls, it was an exciting prospect to meet people in real life when, after one year of working at a distance, a central on-site user acceptance test was planned.

The test activities were to be executed in Atlanta, Georgia. It was my first trip to the US and even my first time flying in general. Since I wanted to be fit and well prepared on the Monday morning, I travelled two days early. So did other colleagues, and on Sunday I met some of the people I recognized from the voices on the many phone calls. By the evening there were already around 30 people from Asia, Europe, and the Americas, all having travelled in and prepared to start the acceptance testing.

That Monday morning the kick-off started punctually at 9am. Introductions, instructions and test scripts were dealt with, and laptops and desktops were switched on. After the first coffee, a rumor spread that some people couldn’t connect to one of the main test systems. I tried myself and, strangely enough, had the same problem. Shortly after 11am it appeared nobody could connect, and it became very crowded in the coffee corner. The test manager was making loud calls, having busy conversations, and looking rather stressed. Around noon it emerged that the main test system was down for planned maintenance and, according to the planning, would only be back on Wednesday evening.

There we were, 30 people who had travelled over their weekend from all over the world, staying in hotels and gathered for one week to work in a single room. Unfortunately, for the first three days of the week we couldn’t do anything because of an unknown planning conflict with another team. A big disappointment for the world travelers, and even worse for the project and the company.

The experience from the year 2000 always stayed in my mind, and now that I have work experience in many other companies I can conclude that those kinds of issues keep popping up at times. Projects and teams have secure individual plannings but are not fully aware of conflicts with other projects and activities. Many times that lack of awareness leads to unexpected system unavailability, which in turn leads to overrunning plannings and deadlines. From those experiences the idea arose to develop software to help companies gain central insight into the availability of systems in their system landscape. This idea was turned into an actual development project in 2018, when one of our customers was looking for tooling in the market but couldn’t find anything. The first version of ERMplanner was born.

Today my company is working on version 2.7 of ERMplanner. To complete the circle, we recently had contact with the company where it all started in the year 2000. Some of the people from the Atlanta test week are still around and, believe it or not, similar issues are still occurring today. The company is very interested in the tooling that we have built over the years. Two weeks ago we held an implementation workshop, and in a few weeks a pilot will start to work with ERMplanner and gain better insight into all the activities affecting their system availability. A successful implementation at this company would be the crowning glory of our work; I could even think of retiring.

Author

Ronald Vreugdenhil, Founder of ERMplanner

Ronald Vreugdenhil studied Computer Science and worked as a consultant in the SAP logistics and workforce management areas. He has over 20 years of national and international project experience.

Since 2009 he has been co-owner of PeachGroup, helping organizations improve their service and maintenance processes. In 2017 he founded ERMplanner. ERMplanner is standard software to turn your release planning into reliable deliveries. It prevents conflicts between the individual schedules of release, change, project and test managers, so that all planned work can be carried out according to schedule.

ERMplanner are a Gold Sponsor at AutomationSTAR 20-21 Nov. 2023 in Berlin

· Categorized: AutomationSTAR · Tagged: 2023, EXPO

Oct 24 2023

Allure Report Is More Than A Pretty Report

Behind the pretty HTML cover of Allure Report is the idea that quality should be the responsibility of the entire team, not just QA – which means that test results should be accessible and readable by people without the QA or dev skill set. Report allows you to move past the details that don’t help you, staying at your preferred level of abstraction – and yet if you do need to drill into the code, it’s just a few mouse clicks away.

Report achieves this basic goal by being language-, framework-, and tool-agnostic. It can hide the peculiarities of your tech stack because it doesn’t depend upon it. So how does one become agnostic? You can’t do it through magic; you have to write tons of integrations, literally hundreds of thousands of lines of code, to integrate with anything and everything. Allure Report is a hub of integrations, and its structure is designed specifically with the purpose of making new integrations easier.

Let’s imagine that we’re writing a new integration for Report, and look at what resources we can leverage to make our job easier. We will be comparing how much effort we need to apply with Report and with other tools. We will start with the most straightforward advantages – the existing codebase; and then talk about more fundamental stuff like architecture and knowledge base.

Selenide native vs Selenide in Allure Report

To begin with, let us compare native reporting for Selenide with the way Selenide is integrated in Allure Report, and then see how difficult it was to write the integration for Report.

While creating simple reporting for Selenide is relatively easy, it’s a completely different story if you want to make quality test reports. In JUnit, there is only one extension point – the exception that is being thrown on test failure. You can jam the meta-information for the report into that exception, but working with this information will be difficult.

By default, Selenide and most other tools take the easy road. When Selenide reports on a failed test, what you get is just the text of the exception, a screenshot, and the HTML of the page at the time of failure.

If you’re the only tester on the project and all the tests are fresh in your memory, this might be more than enough – which is what the developers of Selenide are telling us.

Now, let’s compare this to Allure Report. If you run Report on a Selenide test with nothing plugged in, you’ll get just the text of the exception, same as with Selenide’s report.

But, as I’ve said before, the power of Allure Report is in its integrations. Things will change if we turn on allure-selenide and an integration for the framework you’re using (in this case – allure-junit). First (this is specific to the Selenide integration), we’re going to have to add the following line at the beginning of our test (or as a separate function with a @BeforeAll annotation):

SelenideLogger.addListener("AllureSelenide", new AllureSelenide());

Now, our test results have steps in them, and you can see precisely where the test has failed.

This can help you figure out why the test failed (whether the problem is in the test or in the code). You also get screenshots and the page source. Finally, with these integrations, you can wrap the function calls of your test inside the step() function or use the @Step annotation for functions you write yourself. This way, the steps displayed in test results will have custom names that you’ve written in human language, not lines of code. This makes the test results readable by people who don’t write Java (other testers, managers etc.). Adding all the steps might seem like a lot of extra work, but in the long run it actually saves time, because instead of answering a bunch of questions from other people in your company you can just direct them to test results written in plain English.
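
The mechanism behind such step annotations is simple: a wrapper records a human-readable title each time the decorated function runs. The pure-Python sketch below illustrates the idea only; it is not Allure's actual implementation (allure-python ships its own step decorator):

```python
import functools

RECORDED_STEPS = []  # stands in for the report's step log

def step(title: str):
    """Simplified sketch of a step decorator: record a human-readable
    title whenever the wrapped function runs (not Allure's real code)."""
    def deco(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            RECORDED_STEPS.append(title.format(*args, **kwargs))
            return fn(*args, **kwargs)
        return inner
    return deco

@step("Open the registration page")
def open_page():
    pass

@step("Submit the form for user {0}")
def submit(user):
    pass

open_page()
submit("alice")
print(RECORDED_STEPS)
# ['Open the registration page', 'Submit the form for user alice']
```

The recorded titles, not the underlying code, are what a non-programmer sees in the report.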

This is powerful stuff compared to what Selenide (and most other tools) offer as default reports. So here’s the main question for this article: how much effort did it take to achieve this? The source code for the allure-selenide integration is about 250 lines long. Considering the functionality that this provides, that’s almost nothing. Writing such an integration would probably be as easy as providing the bare exception that we get if we use Selenide’s native reporting.

This is the main takeaway: a proper integration with Allure Report takes about as much effort as a quick and easy integration with other tools (provided we’re talking about a language where Report has an established code base, such as Java or Python). How is that possible?

Common Libraries

The 250 lines of code in allure-selenide leverage files with about 500 lines of code from the allure-model section of allure-java, and about 1300 lines from allure-java-commons. These common libraries have been created to ease the process of making new integrations – and there are more than a dozen for Java alone that utilize these common libraries.

Writing these libraries is not a straightforward task. There are problems of execution here that can be extremely difficult to solve. For instance, when writing the allure-go integration, Anton Sinyaev spent several months solving the issue of parallel test execution (an issue which was left unsolved for 8 years in testify, the framework from which allure-go was forked). Such problems can be unique for a particular framework, which makes writing common libraries difficult. Generally speaking, once the process has been smoothed out, writing an integration for a framework like JUnit might take a month of work; but if there are no common libraries present, you could be looking at 4 or 5 months.

The JSON with the results

Let’s go deeper. What if we’re writing an integration for an entirely new language? Since the language is different, none of the code can be reused. Here, the example with Go is particularly telling, since it is quite unlike Java or Python, both in basic things like lack of classes, and in the way it works with threads. Because of this, not only was it not possible to reuse the code, but even the general solutions couldn’t be translated from one language to another. Then what HAS been reused in that case?

Arguably the most important part of Allure Report is its data format, the JSON file which stores the results of test runs. This is the meeting point for all languages, the thing that makes Allure Report language-agnostic. Designing that format took about a year, and it has incorporated some major architectural decisions – which means if you’re writing a new integration, you no longer have to think about this stuff. Thanks to this, the first, raw version of allure-go was written over a weekend – although it took several months to solve problems of execution and work out the kinks.
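
To make the idea concrete, here is a sketch of what such a result payload could look like. The field names below are a simplified approximation of an Allure-style result file, not the authoritative schema, which carries many more fields (timing, labels, attachments, and so on):

```python
import json
import uuid

def make_result(name: str, status: str, steps: list) -> dict:
    # Simplified approximation of an Allure-style *-result.json payload.
    return {
        "uuid": str(uuid.uuid4()),
        "name": name,
        "status": status,  # e.g. "passed", "failed", "broken", "skipped"
        "steps": [{"name": s, "status": status} for s in steps],
    }

payload = json.dumps(make_result(
    "User Registration", "passed", ["Open page", "Fill form", "Submit"]))
print(payload)
```

Because every language integration only has to emit this shared format, the reporter itself never needs to know which framework produced the results.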

Experience

Finally, there is the least tangible asset of all – experience. Writing integrations is a peculiar field of programming, and a person skilled in it will be much more productive than someone who is just talented and generally experienced. If one had to guess, it would probably take 10 people about 2–3 years to re-do the work that’s been done on Allure Report, with one developer for each of the major languages and its common libraries, 2 or 3 devs for the reporter itself, an architect, and someone to work with the community.

Community

Allure Report’s community is not an asset strictly speaking, but when creating a new integration it plays an extremely important role in several ways.

  1. DEMAND. As we’ve already said, adding test reporting to a framework or a tool can take months of work if done properly. If you’re doing this purely for your own comfort, you’ll probably cut a lot of corners, do things quick and dirty. If, on the other hand, you’re working on something that is going to be used by millions of people, that’s motivation enough to sit around for an extra month or two and provide, say, proper parallel execution of tests.
  2. EXPERIENCED DEVELOPERS. Here, we’re kind of returning to the previous section: the open-source nature of the tool allowed Qameta to get in touch with plenty of developers experienced in writing integrations, and hire from that pool.
  3. THE INTEGRATIONS THEMSELVES. Allure Report didn’t start out as a tool designed to integrate with anything and everything – the first version was built just for JUnit 4 and Python. Pretty much everything outside allure-java and allure-python was initially developed outside Qameta, and then verified and internalized by the company.

All of this has been possible because there are many developers out there for whom Allure Report is a default tool – they are the bedrock of the community.

Conclusion

The structure of Allure Report didn’t appear all at once, the way Athena sprang from the head of Zeus. It took many years of thinking, planning, and re-iterating on community feedback. What emerged as a result was a tool that was purpose-built to be extensible and to smooth out the creation of new integrations. Today, expanding upon this labor means leveraging the code, experience and architectural decisions that have been accumulated over the years.

If you’d like to learn more about Allure Report, we’ve recently created a dedicated site. Naturally, there’s documentation, as well as detailed info on all the integrations (under “Modules”). See if you can find your language and test framework there! And we’re planning to add much more stuff in the future, like guides, so don’t be a stranger and pay us a visit.

Author

Artem Eroshenko

CPO and Co-Founder of Qameta Software

Qameta Software are a Gold Sponsor at AutomationSTAR 20-21 Nov. 2023 in Berlin

· Categorized: AutomationSTAR · Tagged: 2023, EXPO, Gold Sponsor

