
AutomationSTAR

Test Automation Conference Europe


Jul 26 2024

Get the AutomationSTAR Glow Up

Your team works hard in your day-to-day roles. What if you had an opportunity to shine even brighter & take things to the next level? Here it is! Get the AutomationSTAR glow up – a space to come together in-person, get renewed energy, a united vision, and LOTS of new ideas and solutions.

This year is bigger: join test automation engineers, QA and developers from around the globe for 2 days of inspiration, learning, and networking.

With 36 sessions led by 40+ expert speakers, the possibilities are limitless. Your team can work together to stay ahead on what’s coming up next in test automation.

See ticket options for teams – 3-for-2 Offer currently available.

Benefits of Attending Together

Global Perspectives

World-class speakers deliver interactive sessions that bring the team together. Split up across sessions to maximize what you learn, ask questions in a focused setting, absorb lots of new ideas, and come away with actionable next steps for your projects.

A United Vision

Meet other teams from all over Europe, get advice, and maximize your team’s professional development journey. It’s an opportunity to work as one, however your role is connected to test automation.

Total Immersion

It’s a total immersion! 2 whole days, no office distractions. Get your heads together, practice hands-on at deep dives and tutorials, and focus on all the topics you need to help your work.

New Friendships

Broaden your connections with lots of corridor conversations, chats over coffee, a dedicated networking party, Speed Meets, games, giveaways, and so much more. Create a stronger bond with your teammates and get some cool swag along the way!

What Your Peers Say

“A powerhouse of insights! Explored lots of topics & meaningful discussions.”

– Robert, NIBC Bank

“A fantastic environment for learning and collaboration.”

– Harsha, Coosto

“Incredible! I learnt a lot about how to improve, evolve, and innovate.”

– Paulo, Mindera

Train Together, Stay (Ahead) Together!

Knowledge really is best when it’s shared. Teams who train together see an increase in productivity when they return to work. When you put your heads together in an inspired setting, magic happens! Here are just some of the biggest takeaways teams enjoy from AutomationSTAR:

  • Learning about cutting edge technologies
  • Getting a hands-on training experience
  • Improving on best practices
  • Implementing new ideas
  • Connecting with like-minded people

Ready for your team to shine even brighter? Book your AutomationSTAR tickets and join us for a 2-day celebration of test automation.

· Categorized: AutomationSTAR

Jul 22 2024

GenAI, or the triumph of a digital steam engine

If you compare the history of Generative AI with the development of the steam engine, you find surprising parallels. According to Wikipedia, the first steam engine patent was issued in 1606. This was followed by a century and a half of further development until James Watt achieved the breakthrough at the end of the 18th century. At first, he “leased” his steam engine as a service, but then – from 1800 onward – others developed their own machines.

The field of artificial intelligence (AI) has also been around for almost 70 years. Again, the beginnings were slow. It was not until 1997 that Deep Blue succeeded in beating the world chess champion Kasparov. The absolute breakthrough came in 2017 when Google presented the Transformer architecture. Within a very short space of time, the so-called “Large Language Models” (LLM) sprang up like mushrooms. When OpenAI made ChatGPT available to the public free of charge on November 30, 2022, the newest industrial revolution started. Today, most companies are dreaming of huge savings thanks to generative AI.

But is it realistic to believe that AI will do all the work for us? Or is it in danger of blowing up in our faces, as early steam engines sometimes did? Let us dive deeper into the topic.

What can we realistically expect from AI?

Trying to predict the future is a fool’s errand. Nobody knows how we will use AI in ten years, but one thing is for sure: AI will change our lives. The number of AI-based tools currently being developed seems almost unlimited. It will take some time for the wheat to be sorted from the chaff.

Today, the various existing LLMs already provide us with the means to use AI in testing. The use cases are diverse:

  • Get a quick overview of the features to be tested to start the test analysis more easily,
  • Identify parameters and equivalence partitions and create test cases,
  • Translate test cases into a specific format (e.g. Gherkin),
  • Create test scripts,
  • Generate test data,
  • Reduce / optimize existing test cases, and many more.

If you are familiar with generative AI and formulate your prompt skilfully, you will get amazing results. However, it is important to keep in mind that AI is not intelligent in the sense that it really understands what it is saying. The Transformer algorithm only searches for semantic similarity. This produces astonishingly convincing answers, but the AI has not thought things through. We should therefore be wary of seeing AI as a replacement for an employee.

How to write structured prompts

A well-structured prompt has five elements: context, role, instruction, constraints (if applicable), and output format. Take the following example:
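A prompt along these lines might look as follows (the application, field names and exact wording below are illustrative assumptions, not the author’s original example):

You are my helpful assistant for writing Gherkin test scenarios.
I am testing the registration form of the web application "MyShop".
The form has two fields: "e-mail address" (mandatory) and "voucher code" (optional, exactly 8 digits).

Determine the equivalence classes for both fields, create Gherkin scenarios that cover
every valid and invalid class, and check that each equivalence class is covered by at
least one scenario.

Return everything as a single file "registration.feature" and use the following output template:

Feature: <feature name>
  Scenario: <scenario name>
    Given <precondition>
    When <action>
    Then <expected result>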

Since LLMs only “think” in semantic similarities, it is helpful to repeat words and to provide examples. The more terms in the prompt point in the right direction, the better the answer will be.

In the example above, we find all elements of a structured prompt:

  • Context: test of a specific web application using Gherkin scenarios
  • Role: assistant for writing Gherkin test scenarios
  • Instructions: determine the equivalence classes, create Gherkin scenarios, check coverage
  • Constraint: one file *.feature
  • Output format: determined by the output template

Sometimes the AI goes in the wrong direction. For example, it may ignore the invalid equivalence partitions or disregard the output format. In that case it helps to provide examples. This prompting technique is called “n-shot prompting”, where n stands for the number of examples. If your examples contain scenario names of a specific format, the AI will most likely answer with similar names.
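For instance, appending a single sample scenario (one-shot) already steers both the structure and the naming convention; the scenario below is again only an illustration:

Example scenario:
  Scenario: REG_01 - voucher code too short
    Given the registration form is open
    When the user enters the voucher code "1234"
    Then the error message "The voucher code must have exactly 8 digits" is displayed

If the sample follows a “REG_nn” naming pattern, the generated scenarios will most likely follow it, too.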

The role of the role

We started the prompt with “You are my helpful assistant”. This was probably not even necessary, because it is already covered by the system prompt, i.e. the prompt sent by the chatbot interface prior to our own prompt. The system prompt of ChatGPT 4.0 was leaked not so long ago, simply by asking the model to repeat the instructions it had been given.

It is enormous, trying to cover all potential requests as well as adversarial attacks.

Still, starting the prompt with “You are my helpful assistant” has several advantages. On the one hand, it reminds not only the AI of its role, but also us: the eloquence of the answers should not tempt us to forget the basic principle that AI does not think for itself. On the other hand, it puts the AI in a “be helpful” mood. This may sound ridiculous, but the jailbreaking community recently discovered that adversarial attacks are more likely to succeed if you force the LLM into producing affirmative responses. The easiest way to do this is to end the prompt with “Respond with ‘Sure, here is how to…’”. (For more details, please refer to this article.)

Keep the risks in mind!

Data security is probably the most obvious problem when using Generative AI for testing purposes. Since it is very tempting to use GenAI, your organization should consider a solution as soon as possible. It will probably come down to operating an in-house LLM.

Next come copyright issues, especially if you use AI for coding purposes. I am not a lawyer and therefore will not venture into this area, but I strongly recommend consulting an expert.

Ecological risks are often mentioned, but as with waste avoidance, knowing about them does little to change our behaviour. Just as a rough indicator: Each generated image corresponds to one complete recharge of a smartphone! Therefore, I am convinced that we are well advised to develop healthy reflexes right from the start:

  • Limit the use of Generative AI to tasks that are really helpful. For example, I quickly gave up asking ChatGPT to write conference abstracts for me. Instead, I ask it to assume the role of an English teacher and to correct my homework.
  • If possible, use smaller LLMs. Even if you will probably get the best results with ChatGPT4, smaller models like Mistral-Tiny may also do the specific job.
  • Create text rather than images. For example, it is possible to describe UML models as text using Mermaid.js or PlantUML, as in the small sketch after this list.
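A few lines of Mermaid text (the class names below are made up) are enough to describe a small model and can be rendered into a diagram whenever one is actually needed:

classDiagram
    class Account {
        +String owner
        +deposit(amount)
        +withdraw(amount)
    }
    class SavingsAccount {
        +Float interestRate
    }
    Account <|-- SavingsAccount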

Of all AI types, generative AI is the AI that needs the most electricity and water. Often enough, there are ways of solving the same task with deep learning algorithms or perhaps even without AI altogether.

Stay tuned!

There is a lot more to say about AI, and as you read this, new use cases may already be emerging. But one thing is certain: only those who know their way around will be able to make full use of the new possibilities. Smartesting therefore offers training specifically for testers. In an exceptionally practice-oriented 2-day course, participants learn:

  •  what Generative AI brings to software testing,
  • how to obtain good results by applying prompting techniques,
  • how to detect and mitigate risks related to the use of Generative AI,
  • what possibilities exist beyond the chat mode and
  • what to consider before introducing Generative AI into your organization.

The aim is to use AI to accelerate test analysis/design, test suite optimization, test automation and test maintenance activities. Smartesting’s LLM Workbench, used during the training course, gives you access to 14 LLMs of different sizes from different providers (OpenAI, Mistral, Meta, Anthropic and Perplexity) and lets you compare their answers directly. The training is available in French, English and German. Contact us if you want to know more.

Author

Anne Kramer, Global CSM, Smartesting

During her career spent in highly regulated industries (electronic payment, life sciences), Anne has accumulated exceptional experience of IT projects, particularly in their QA and testing dimension. An expert in test design approaches based on visual representations, Anne is passionate about sharing her knowledge and expertise. In April 2022, she joined Smartesting as Global CSM.

Smartesting is an EXPO Exhibitor at AutomationSTAR 2024, join us in Vienna.

· Categorized: AutomationSTAR · Tagged: 2024, EXPO

Jul 15 2024

Low code test automation with TestBench and Robot Framework 

In today’s software development, quality assurance is a critical success factor. One of the biggest challenges here is the frequent discrepancy between the technical test specification and the test automation. These tasks are usually implemented in different tools, which leads to an incomplete overview of the test process and to divergent specifications and automations. This break between tools also makes coordination between test designers and test automation experts more difficult.

Our goal with the integration of TestBench and Robot Framework is to overcome these challenges and enable a seamless, efficient testing process. TestBench manages keywords in a centralised keyword repository, which simplifies maintenance and ensures consistency. The use of Data Driven Testing in TestBench significantly increases test coverage as tests with different data variations can be easily performed. 

With Keyword-Driven Testing (KDT) and Data Driven Testing (DDT) in TestBench, we significantly increase the efficiency of the testing process. Test designers can create new automated test sequences with a low-code approach in TestBench, which not only lowers the barrier for test automation, but also enables faster and more precise implementation. This integration leads to harmonised collaboration between test designers and automation engineers and enables effective and comprehensive quality assurance. 

Speed up test specification with Keyword-Driven Testing  

Keyword-Driven Testing is a test specification method in which test cases are defined by using keywords that represent specific actions. These keywords abstract the test logic, allowing test cases to be created independently of the actual automation technology. This makes it easier for non-programmers to create and maintain tests, as the keywords are formulated in a language that can be understood by specialised users. KDT promotes the reusability and maintainability of tests, as keywords can be used in different test cases once they have been defined. If keywords need to be adapted, this is only necessary at one central point and has a direct effect on all test cases in which the keyword is used. 
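As a minimal sketch of the idea (the keyword, locators and URL below are invented, and the snippet uses plain Robot Framework syntax rather than the TestBench editor), a business-readable keyword is defined once and then reused in test cases:

*** Settings ***
Library    SeleniumLibrary

*** Keywords ***
Log In As
    [Arguments]     ${username}    ${password}
    Input Text      id:username    ${username}
    Input Text      id:password    ${password}
    Click Button    id:login

*** Test Cases ***
Successful Login
    Open Browser    https://shop.example.com    chrome
    Log In As       demo_user      demo_password
    Page Should Contain    Welcome
    [Teardown]      Close Browser

If the login form changes, only the Log In As keyword has to be adapted; every test case that uses it picks up the change automatically.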

Maximising test coverage through Data-Driven Testing 

Data-Driven Testing (DDT) is a test automation technique in which the same test cases are executed with different input data. This method enables efficient scaling of tests as the test logic is developed only once, while different data variations can be tested. This is done by providing the input data from external sources such as files, databases or tables. DDT increases test coverage and detects potential errors that can be caused by different data combinations without having to change the test scripts themselves. 
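A corresponding sketch in Robot Framework syntax (test names, locators and data are again invented): the test logic is written once as a template keyword and then fed with several data rows:

*** Settings ***
Library          SeleniumLibrary
Test Template    Login Should Fail With Message

*** Test Cases ***
# Test name          USERNAME     PASSWORD     EXPECTED MESSAGE
Wrong Password       demo_user    wrong        Invalid credentials
Empty Password       demo_user    ${EMPTY}     Password is required
Unknown User         nobody       secret123    Invalid credentials

*** Keywords ***
Login Should Fail With Message
    [Arguments]     ${username}    ${password}    ${message}
    Open Browser    https://shop.example.com    chrome
    Input Text      id:username    ${username}
    Input Text      id:password    ${password}
    Click Button    id:login
    Page Should Contain    ${message}
    [Teardown]      Close Browser

Adding a new data combination is then a one-line change that does not touch the test logic at all.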

The synergy of Robot Framework and TestBench 

Keywords created in TestBench are converted into executable automation steps by automation specialists. No complete test cases are automated, only the atomic technical steps represented by the keywords. These atomic steps form the building blocks that can be reused in different test cases. This promotes the reusability and maintainability of the automated tests, as the basic actions can be implemented once and used in different contexts. 

We use the Robot Framework for automation, which fits perfectly with TestBench’s Keyword-Driven Test approach. As an open-source tool, Robot Framework offers a flexible and extensible framework that makes it possible to test a wide variety of technologies. By using 3rd party libraries to control the system under test, end-to-end tests can be realised across different applications. This significantly increases flexibility and adaptability. The combination of TestBench and Robot Framework maximises the efficiency and effectiveness of the test process. 

Summary: How TestBench and Robot Framework are revolutionising test automation 

In our solution, test cases are technically specified in TestBench and converted into Robot Framework test cases for automated execution. The entire planning and management takes place in TestBench, which also enables specialised test designers to compile automated tests from existing steps. The clear interface between test designers and test automation specialists simplifies collaboration considerably, allowing both to focus on their respective strengths – technical expertise and programming. 

Automated test execution can be triggered either manually by a human or through integration into CI/CD pipelines. The test results are imported and displayed in TestBench, including the results from manually executed tests. TestBench also offers a special wizard, the ‘iTORX’, for the manual execution of tests. 
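In a CI/CD pipeline, such a trigger can be as simple as one call to the Robot Framework command-line runner (the paths below are placeholders):

robot --outputdir results tests/

The run produces output.xml, log.html and report.html in the results directory; the result files can then be imported into TestBench as described above.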

A major advantage is the simple traceability from requirements to test results. This supports a comprehensive and transparent test strategy.  

Author

Falk Atrock, IT consultant

Falk Atrock is an experienced IT consultant with over ten years of experience in Quality Assurance. He has been with imbus AG for seven years, focusing on client projects primarily in test management and test automation. Falk has played a pivotal role in implementing TestBench and test automation solutions for clients. He is also an experienced trainer, conducting workshops and webinars on TestBench and Robot Framework. Since August 2023, Falk has been the Product Owner for TestBench, steering the product’s development and strategic direction.

TestBench is an EXPO exhibitor at AutomationSTAR 2024, join us in Vienna

· Categorized: AutomationSTAR · Tagged: 2024, EXPO

Jul 08 2024

Allure Report Hands-on Guide

Allure Report is an open-source multi-language test reporting tool. It builds a detailed representation of what has been tested and extracts as much information as possible from everyday test execution.

In this guide, we’ll take a journey through the main steps toward creating your first Allure report and discover all the fancy features it brings to routine automated testing reports.

Since Allure Report has integrations with many testing frameworks across different programming languages, some steps may vary for each reader, so feel free to jump into the official documentation page for details.

Installing Allure Report

As always, the first step is to install the Allure library. The exact steps vary depending on your OS:

Homebrew (for macOS and Linux)

For Linux and macOS, automated installation is available via Homebrew

brew install allure

Scoop (for Windows)

For Windows, Allure is available from the Scoop command-line installer.

To install Allure, download and install Scoop, and then execute the following command in Powershell:

scoop install allure

System package manager (for Linux)

  1. Go to the latest Allure Report release on GitHub and download the allure-*.deb or allure-*.rpm package, depending on which package format your Linux distribution supports.
  2. Go to the directory with the package in a terminal and install it.

For the DEB package:

sudo dpkg -i allure_2.24.0-1_all.deb

For the RPM package:

sudo rpm -i allure_2.24.0-1.noarch.rpm

NPM (any system)

  1. Make sure Node.js and NPM are installed.
  2. Make sure Java version 8 or above is installed and its directory is specified in the JAVA_HOME environment variable.
  3. In a terminal, go to the project’s root directory for which you want to use Allure Report. Run this command:
npm install --save-dev allure-commandline

This installation method only makes Allure Report available in the given project’s directory. Also note that the commands for running Allure Report must be prefixed with npx, for example:

npx allure-commandline serve

From an archive (any system)

  1. Make sure Java version 8 or above is installed and its directory is specified in the JAVA_HOME environment variable.
  2. Go to the latest Allure Report release on GitHub and download the allure-*.zip or allure-*.tgz archive.
  3. Uncompress the archive into any directory. The Allure Report can now be run using the bin/allure or bin/allure.bat script, depending on the operating system.

With this installation method, the commands for running Allure Report must contain the full path to the scripts, for example:

D:\Tools\allure-2.24.0\bin\allure.bat serve

Check the installation

Once you’ve followed one of the above steps, it’s a good idea to check if Allure is now available on your system:

$ allure --version
2.24.0

Plugging Allure Report into code

The next step is to provide all the necessary dependencies in a configuration file so that your build tool can use Allure.

Each framework and build tool has its own configuration settings, so the best way to get a clue on adding dependencies is to look for an example at the documentation page or get an example at GitHub.
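As one possible illustration, a minimal configuration for a JUnit 5 project built with Gradle (the combination used in the example project later in this guide) might look like the sketch below. The version numbers are assumptions, and framework-specific extras such as the AspectJ weaver used for step and attachment annotations are covered in the documentation:

// build.gradle.kts – minimal sketch
dependencies {
    testImplementation(platform("io.qameta.allure:allure-bom:2.24.0"))
    testImplementation("io.qameta.allure:allure-junit5")
    testImplementation("org.junit.jupiter:junit-jupiter:5.10.0")
}

tasks.test {
    useJUnitPlatform()
}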

Adding annotations to the report

After plugging Allure Report into the codebase, we can run it. You will get a pretty report, though it won’t have much to tell (at least compared to its full potential):

We need to annotate the tests to provide all the necessary data to Allure. There are several types of annotations:

  • Descriptive annotations that provide as much information as possible about the test case and its context
  • The step annotation, the one that allows Allure to build nice and detailed test scenarios
  • Parameterized annotations that can accept values from the test’s own inputs

Descriptive annotations

Let’s go over the existing annotations one by one.

@Epic, @Feature, @Story:

This is a set of annotations designed to make test-case tree grouping more flexible and informative. The annotations follow the Agile approach for task definition. These annotations may be implemented on the class or on the method level.

Epic defines the highest-level task that will be decomposed into features. Features will group specific stories, providing an easily readable structure.

As a story is the lowest part of the epic-feature-story hierarchy, the class-level story adds data to all class methods.

@Description

An annotation that provides a detailed description of a test method/class to be displayed in the report.

@Owner

A simple annotation to highlight the person behind the specific test case so that everyone knows whom to ask for a fix in case of a broken/failed test. Quite useful for large teams.

@Severity

In Allure, any @Test can be defined with a @Severity annotation that accepts any of the following values:

  • SeverityLevel.BLOCKER
  • SeverityLevel.CRITICAL
  • SeverityLevel.NORMAL
  • SeverityLevel.MINOR
  • SeverityLevel.TRIVIAL

The severity level will be displayed in the report so that the tester understands how serious the problem is if a test has failed.

Sample Tests

Let’s take a look at an example of these annotations in Java (annotations for any other language will look similar):

public class AllureExampleTest {

    @Test
    @Epic("Sign In flow")
    @Feature("Login form")
    @Story("User enters the wrong password")
    @Owner("Nicola Tesla")
    @Severity(SeverityLevel.BLOCKER)
    @Description("Test that verifies a user cannot enter the page without logging in")
    public void annotationDescriptionTest() {}

    /**
     * JavaDoc description
     */
    @Test
    @Description(useJavaDoc = true)
    public void javadocDescriptionTest() {}
}

The Step annotation

Detailed reporting with steps is one of the features people love about Allure Report. The @Step annotation makes this feature possible by providing a human-readable description of any action within a test. Steps can be used in various testing scenarios. They can be parametrized, make checks, have nested steps, and create attachments. Each step has a name.

To define steps in code, each method should have a @Step annotation with a String description; otherwise, the step name equals the annotated method name.

A step can extract the values of parameters and fields using reflection, so that they can be used, for example, in the name of the step.

Here are several examples of what the Step annotation looks like in code:

package io.qameta.allure.examples.junit5;

import io.qameta.allure.Allure;
import io.qameta.allure.Step;
import org.junit.jupiter.api.Test;

public class AllureStepTest {

    private static final String GLOBAL_PARAMETER = "global value";

    // A test inside which a @Step-annotated method is used
    @Test
    public void annotatedStepTest() {
        annotatedStep("local value");
    }

    // A test with a step implemented using a lambda
    @Test
    public void lambdaStepTest() {
        final String localParameter = "parameter value";
        Allure.step(String.format("Parent lambda step with parameter [%s]", localParameter), (step) -> {
            step.parameter("parameter", localParameter);
            Allure.step(String.format("Nested lambda step with global parameter [%s]", GLOBAL_PARAMETER));
        });
    }

    // The methods that can be used as steps
    @Step("Parent annotated step with parameter [{parameter}]")
    public void annotatedStep(final String parameter) {
        nestedAnnotatedStep();
    }

    @Step("Nested annotated step with a global parameter [{this.GLOBAL_PARAMETER}]")
    public void nestedAnnotatedStep() {

    }
}

Parameterized annotations

@Attachment

This annotation allows attaching a String or Byte array to the report. It is very helpful if you need to show a screenshot or a failure stack trace in your test results.
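In Java this looks roughly like the sketch below (the method names and content are illustrative); whatever the annotated method returns is attached to the current test or step:

import io.qameta.allure.Attachment;

public class AttachmentSteps {

    // The returned bytes are stored as a PNG attachment of the current test/step.
    @Attachment(value = "Page screenshot", type = "image/png")
    public byte[] attachScreenshot(byte[] screenshot) {
        return screenshot;
    }

    // The returned String is stored as a plain-text attachment.
    @Attachment(value = "Failure log", type = "text/plain")
    public String attachLog(String log) {
        return log;
    }
}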

@Link

It’s just as the name suggests: if you need to add a link to the test, be it a reference or a hyperlink, this is the annotation you need.

It takes several parameters:

  • name: link text
  • url: an actual link
  • type: type of link
  • value: similar to name

@Muted

An annotation that excludes a test from a report.

@TmsLink

A way to link a result with a TMS object, if you use any. It allows entering just the test case ID, which is inserted into the pre-configured URL (via allure.link.tms.pattern). The annotation takes a String value, the ID of the test case in the test management system. For example, if the link to our test case on the TMS is https://tms.yourcompany.com/browse/tc-12, then we can use tc-12 as the value.
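On a test method, the link annotations look like this in Java (the URL and IDs below are illustrative):

import io.qameta.allure.Link;
import io.qameta.allure.TmsLink;
import org.junit.jupiter.api.Test;

public class LinkExampleTest {

    @Test
    @Link(name = "Login specification", url = "https://example.org/specs/login")
    @TmsLink("tc-12")  // expanded using the allure.link.tms.pattern URL
    public void linkedTest() {
        // test body
    }
}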

Running Allure Report

Local launch

Running Allure Report locally is a great way to get started with it. However, remember that local execution does not provide execution history, result history, or trend graphs.

Generally speaking, the easiest way to try Allure Report is to download a pre-made empty project from Allure Start. There, you can select any tech stack you want; download the project, add some sample tests, and build it.

However, here, we’re going to go with an example that already has some tests in it, because we want to show you Report’s features. You can follow along by downloading the code from the GitHub link. The example uses JUnit 5 and Gradle.

Once you’ve downloaded the project and built it with Gradle, you can run the tests with the ./gradlew test command. As soon as they have been executed, Gradle will store the test results in the build directory.

Let’s take the data and build a report! With the allure serve /path/to/allure-results command, we start an Allure Report instance (the allure-results folder is usually in the build folder in the root of your project). It builds a local web report which automatically opens as a page:

CI (Jenkins, TeamCity, and Bamboo)

Instead of running Allure Report locally, you can generate it on a CI server. Allure Report has great integrations with multiple CI systems. Each system has its own setup specifics, and we won’t cover them all in this post; you can follow the documentation page for steps to create a report with a CI system (e.g., Jenkins).

Allure Report Features

Now that we’ve set up the basic functionality, you can build upon it with other features of Allure Report.

Attachments

Often, it’s not enough to read the list of executed steps; you need to closely examine the system under test. Allure Report allows you to automatically gather all kinds of data about the system you’re testing – take screenshots, gather webpage source code, etc. For this, we either leverage the existing functionality of your framework or create this functionality from scratch. Once collected, the data is attached to your report:

This way, you’ve got exhaustive information necessary to diagnose and reproduce bugs.

Integrations

Allure Report is polyglot: it works with pretty much any popular framework. The architecture of Report has been designed to simplify the process of making integrations, and we’ve put a lot of thought and effort into creating them.

A full list of integrations is available on our website. Detailed instructions for different frameworks are available in our documentation (for Java, Python, etc.).

The integrations, together with the steps, help hide any technical details of your tests (such as programming language and test framework), so the interface presents the test scenario in a form familiar to manual testers or managers.

Categories

Categories are one of the most time-saving features of Allure Report. They provide simple automation for fail resolution. There are two categories of defects by default:

  • Product defects (failed tests)
  • Test defects (broken tests)

Categories are fully customizable via simple JSON configuration. To create custom defects classification, add a categories.json file to the allure-results directory before report generation.

JSON template:

[
  {"name": "Ignored tests", "matchedStatuses": ["skipped"]},
  {"name": "Infrastructure problems", "matchedStatuses": ["broken", "failed"], "messageRegex": ".*bye-bye.*"},
  {"name": "Outdated tests", "matchedStatuses": ["broken"], "traceRegex": ".*FileNotFoundException.*"},
  {"name": "Product defects", "matchedStatuses": ["failed"]},
  {"name": "Test defects", "matchedStatuses": ["broken"]}
]

The JSON includes the following data:

  • (mandatory) Category name
  • (optional) list of suitable test statuses. The default ones are: ["failed", "broken", "passed", "skipped", "unknown"]
  • (optional) regex pattern to check the test error message. Default value: ".*"
  • (optional) regex pattern to check the stack trace. Default value: ".*"

A test result falls into a category if its status is in the list and both the error message and the stack trace match the pattern.

If you’re using the allure-maven or allure-gradle plugin, the categories.json file can be stored in the test resources directory.

Parameterized tests

Allure knows how to work with parameterized automated tests. Let’s take a JUnit test as an example. First, let’s create a test class with a parameterized test:

@Layer("rest")
@Owner("baev")
@Feature("Issues")
public class IssuesRestTest {

    private static final String OWNER = "allure-framework";
    private static final String REPO = "allure2";

    private final RestSteps steps = new RestSteps();

    @TM4J("AE-T1")
    @Story("Create new issue")
    @Microservice("Billing")
    @Tags({@Tag("api"), @Tag("smoke")})
    // The important annotation:
    @ParameterizedTest(name = "Create issue via api")
    @ValueSource(strings = {"First Note", "Second Note"})
    public void shouldCreateUserNote(String title) {
        steps.createIssueWithTitle(OWNER, REPO, title);
        steps.shouldSeeIssueWithTitle(OWNER, REPO, title);
    }
}

After execution, Allure provides the parameterized test run results as a set of tests, with the value of the parameter specified in the overview of each test:

If any tests fail, Allure provides detailed information about that particular case.

Retries

Retries are executions of the same test case (a signature is calculated based on the test method name and parameters) within one test suite execution, e.g., when using TestNG IRetryAnalyzer or JUnit retry Rules. Unfortunately, this is not supported for local runs.

Test History

Allure Report supports history for tests. At each report generation during the build, the Allure Plugin for Jenkins will try to access the working directory of the previous build and copy the contents of the allure-report/history folder to the current report.

Currently, the history entry for the test case stores information for up to 5 previous results, and it is not available for local runs.

Report Structure and Dashboards

Now, let’s go over the structure of a report, as it is presented in the main menu:

Overview

The default page would be the ‘Overview’ page with dashboards and widgets. The page has several default widgets representing the essential characteristics of your project and test environments:

  • Statistics – overall report statistics.
  • Launches – statistics by launch, provided that the report is based on multiple launches.
  • Behaviors – information on results aggregated according to stories and features.
  • Executors – information on test executors used to run the tests.
  • History Trend – if tests accumulate some historical data, a trend will be calculated and shown on the graph.
  • Environment – information on the test environment.

Home page widgets are draggable and configurable. Also, Allure supports its own plugin system, so you can have very different widget layouts.

Categories

This page shows all defects. Assertion failures are reported as ‘Product defects’, whereas failures caused by exceptions are shown in the report as ‘Test defects’.

Suites, Behaviors, and Packages

Three tabs that show the test case tree with tests grouped by:

  • Suites. This tab displays test cases based on the suite executed.
  • Behaviors. Here, the tree is based on stories and feature annotations.
  • Packages. In this tab, the test cases are grouped by package names.

Graphs

On this tab, test results are visualized with charts. The default configurations provide:

  • A pie chart with general execution results.
  • A duration trend of test case execution. This is a nice feature if you need to investigate which tests require more time and optimization.
  • The retries trend shows how tests are re-executed during a single test run.
  • The categories trend shows the categories of defects encountered.

Timeline

This view displays the timeline of executed tests.

Conclusion

Allure Report is being used by millions of people, and it has become a mainstay of test automation, which is why we will continue supporting and perfecting this tool. If you’re spending too much time digging through your test results, or if you want to show them to other people who don’t code – Allure Report is the tool for you.

Author

Dmitry Baev, Author of Allure Framework 

Mikhail Lankin, Content writer, Qameta Software 

· Categorized: AutomationSTAR · Tagged: 2024, EXPO

Jul 01 2024

Elevating Global User Experience with Generative AI in Testing

In today’s competitive, digital landscape, businesses cater to a global audience with diverse needs. However, ensuring a flawless user experience across multiple devices, models, and locations can become a major challenge. Traditional testing methods often fall short, unable to keep up with the unique challenges posed by a global user base. 

Consider a customer using your app or website in France on a brand-new phone model. An undetected bug specific to that device could derail their experience, leading to lost conversions and frustration. Generative AI (Gen AI) extends beyond mimicking existing actions. It leverages ML to generate new test cases, data, and scenarios.

Some Use Cases of Gen AI 

Here are some of Gen AI’s use cases: 

  • Simulating a Broader User Landscape: Gen AI can generate diverse test scenarios covering various device combinations, languages, and user behaviours. This ensures your platform functions flawlessly for everyone across the globe.
  • Identifying Hidden Bugs: By generating edge cases and unexpected user scenarios, Gen AI helps identify the critical bugs that traditional testing might miss. This approach results in a more robust and user-friendly website/app. 
  • Data-Driven Insights: Gen AI can analyze the vast amount of data aggregated during the testing phase and provide deep analytics into user behavior and pain points. This data can further be leveraged to refine the design, prioritize certain features, and ensure a user-friendly experience. 
  • Improving Accessibility Testing: Generative AI can simulate interactions of users with disabilities, helping identify and address accessibility issues to ensure inclusivity for everyone. 

Benefits of Integrating Gen AI in Testing

Integrating Gen AI into your QA testing strategy offers a multitude of benefits: 

  • Delivering Smooth UX: Gen AI ensures a smoother and more intuitive user experience by identifying and solving potential challenges before launch. This leads to higher user satisfaction, increased engagement, and brand credibility.
  • Lower Development Costs: Gen AI helps identify critical bugs early in the development cycle, saving costly redesign efforts later. Additionally, the efficiency gains from automated testing free up resources for other higher-value tasks.
  • Enhanced Products for Improved ROI: Gen AI helps in creating products that cater to a global user base, anticipating different user needs. A seamless, user-friendly website/app often ensures happy and satisfied customers, and gives businesses a competitive edge. This further helps in creating a positive impact on the overall ROI. 

Real-World Examples of Gen AI in Action

Several companies are leveraging Gen AI in testing to enhance their UX capabilities. These include: 

  • Amazon: The e-commerce giant uses Gen AI to craft user personas that reflect the shopping habits and preferences of its global customers. As a result, it can easily fine-tune its platform for optimal performance and user satisfaction.
  • Netflix: To enhance user engagement and keep viewers hooked on their favorite shows and movies, Netflix leverages Gen AI to create personalized user interfaces based on individual viewing preferences. 
  • Facebook: The social media platform has been using Gen AI to test various newsfeed algorithms and layouts across different user profiles. The insights collected have helped it deliver a personalized experience that keeps users returning for more.

Final Thoughts

While Gen AI is still in its early stages, in the coming years we can expect it to drive more advancements and innovations in the software testing landscape. According to the Future of Quality Assurance Report, almost 50% of teams are already using Gen AI for test case generation, and we can expect that number to increase, leading to more applications of Gen AI in the years ahead.

For instance, by analyzing user feedback and social media sentiment, AI can show deeper insights into user experience challenges. Other advancements can also include predicting user behavior by identifying trends and allowing for proactive design tweaks before issues surface.

As the technology landscape continues to evolve, we can expect many more applications of Gen AI in testing that will help deliver flawless products that resonate well with user expectations. 

Author 


Mudit Singh, Head of Marketing and Growth at LambdaTest

A product and growth expert with 15+ years of experience in building great software products. A part of LambdaTest’s founding team, Mudit Singh has been deep diving into software testing processes working towards the aim of bringing all testing ecosystems to the cloud.  Mudit currently leads marketing for LambdaTest as Head of Marketing & Growth. LambdaTest is a leading continuous quality testing cloud platform, headquartered in San Francisco, US. LambdaTest has 2mn+ users and 10,000+ customers across the globe.

Lambdatest is an EXPO Exhibitor at AutomationSTAR 2024, join us in Vienna.

· Categorized: AutomationSTAR · Tagged: 2024, EXPO

