What You Will Learn
- Transform vague AI mandates into testable experiments with clear success criteria, time-boxed to 1-2 weeks, and scoped to a single feature or team
- Design 2-3 lightweight metrics that measure AI impact on actual quality outcomes (defect detection, false positive rates, investigation time) using existing tooling
- Make evidence-based scale/pivot/abandon decisions about AI testing approaches by running small experiments that prove (or disprove) value in your specific context
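As a flavour of the second outcome, the lightweight metrics named above (defect detection rate, false positive rate, investigation time) can often be computed from data teams already export from their test tooling. The sketch below is a hypothetical illustration, not material from the session: the `Finding` record and its fields are assumptions about what such exported data might look like.

```python
# Hypothetical sketch: computing lightweight AI-testing metrics from
# triage records a team might already have in its tooling.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Finding:
    flagged_by_ai: bool           # did the AI tool raise this finding?
    confirmed_defect: bool        # did a human confirm it as a real defect?
    investigation_minutes: float  # time spent triaging the finding


def summarise(findings: list[Finding]) -> dict[str, float]:
    ai = [f for f in findings if f.flagged_by_ai]
    true_positives = sum(f.confirmed_defect for f in ai)
    total_defects = sum(f.confirmed_defect for f in findings)
    return {
        # share of all confirmed defects that the AI tool caught
        "detection_rate": true_positives / total_defects if total_defects else 0.0,
        # share of AI findings that turned out not to be real defects
        "false_positive_rate": (len(ai) - true_positives) / len(ai) if ai else 0.0,
        # average triage time per AI-raised finding
        "mean_investigation_minutes": mean(f.investigation_minutes for f in ai) if ai else 0.0,
    }


findings = [
    Finding(True, True, 12.0),
    Finding(True, False, 25.0),
    Finding(False, True, 8.0),
    Finding(True, True, 5.0),
]
print(summarise(findings))
```

Even a handful of numbers like these, tracked before and after introducing an AI tool, gives a scale/pivot/abandon decision something concrete to rest on.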
Session Details
- Intermediate
- 90 minutes
- Hands-on workshop
- AI-Augmented Automation (Gen AI)
Session Speaker

Katja Obring
Kato Coaching Ltd., United Kingdom
Kat Obring is the founder of Kato Coaching and creator of the Q.E.D. (Quality-focused Experimentation and Development) framework. With over 20 years in the software industry, Kat now focuses on what matters most: teaching teams and individuals how to measurably improve the quality of their work. Her practical frameworks combine insights from her diverse experience as a DevOps QA engineer and Head of Delivery, and, surprisingly, from her early career as a chef. She specialises in helping QA and test professionals transform from reactive testers into strategic quality advisors through evidence-based decision-making. She has learned that evidence always beats guesswork, and that a well-designed experiment will reveal more truth than months of planning ever could.
