3 Days Online
A practical, hands-on course for experienced testers who want to integrate generative AI into real testing work—responsibly and effectively.
Rather than treating AI as a testing replacement or a standalone skill, this course positions generative AI as a collaborator that supports tester judgment. You work with a small but progressively evolving application and use AI throughout the course to explore how it can assist with generating test ideas, inferring requirements, refining test design, modeling system behavior, supporting exploratory testing, analyzing defects, planning strategy, and adapting tests as systems change. The emphasis is on evaluation over generation: understanding where AI output is useful, where it introduces assumptions, and where human testing judgment remains essential.
The course teaches experienced testers how to use generative AI as part of real testing work, not as a shortcut or novelty. Instead of focusing on AI theory, prompt tricks, or certification checklists, it follows the natural flow of testing work: from initial test ideas through requirement inference, test design, execution, exploration, defect analysis, strategy, and adaptation to change, showing where AI helps, where it fails, and where tester judgment remains essential.
Evaluate generative AI output critically rather than accepting it at face value
Identify assumptions, gaps, and invented details in AI-generated tests and requirements
Infer requirements and business rules from observed system behavior
Define testing purpose and scope based on role, risk, and intent
Design meaningful test data using equivalence partitioning and boundary analysis
Model system behavior using states and transitions
Turn test ideas into executable manual and automated tests
Use AI to support exploratory testing while maintaining tester control
Analyze bug reports to identify patterns, impact, and risk
Create and justify a test strategy aligned with product and business goals
Adapt existing test suites as systems evolve
Jump directly into testing by asking AI to generate test cases with minimal context. Attempt to execute those tests and discover where AI output helps, where it breaks down, and where assumptions have been silently introduced. By comparing results from different AI tools, you learn to recognize the difference between tests that appear reasonable and tests that are actually executable and grounded in observed behavior.
Use AI to generate test cases from a short description or image of a UI page. Compare outputs from multiple chatbots and annotate where assumptions, ambiguities, or missing details appear.
AI was able to generate tests from little more than a snapshot of a UI. In this module, you work in the opposite direction: using tests and observed behavior to infer requirements and business rules. You examine how AI fills gaps with assumptions, learn to separate confirmed behavior from speculation, and use requirement taxonomies to organize and correct AI-generated statements. The focus is on requirements as testable descriptions, not authoritative truths.
Use AI to extract implied requirements and business rules from an existing test set. Review, correct, and reorganize those statements using a requirements classification taxonomy.
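The classification step above can be sketched in a few lines. This is a minimal illustration of sorting inferred statements into taxonomy categories and quarantining speculation until it is confirmed against observed behavior; the category names and requirement statements are invented for the example, not taken from the course application.

```python
# Sketch: organizing AI-inferred requirement statements with a simple
# classification taxonomy. Categories and statements are illustrative.
from collections import defaultdict

inferred = [
    ("functional",  "Search returns items whose name contains the query text"),
    ("functional",  "An empty query shows no results"),
    ("constraint",  "The query field accepts at most 50 characters"),
    ("speculation", "Results are probably sorted by relevance"),  # not yet observed
]

by_category = defaultdict(list)
for category, statement in inferred:
    by_category[category].append(statement)

# Speculative statements stay separate until confirmed against
# observed system behavior; only confirmed categories feed the test set.
confirmed = {c: stmts for c, stmts in by_category.items() if c != "speculation"}
assert "speculation" not in confirmed
```

The point of the structure is the separation itself: anything AI asserted without evidence is visible as a distinct bucket rather than blended into the requirements.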
Once AI can generate large volumes of tests, the challenge is no longer quantity—it is relevance. In this module, you step back to define your role as a tester and clarify the purpose of your testing. You examine how different testing perspectives lead to different test choices, and how AI must be guided by intent in order to be useful rather than overwhelming.
Use AI to propose testing focus areas based on different tester roles. Refine prompts to reflect your own role and use AI output to help narrow an overly broad test set.
Real test cases require more than placeholder phrases such as “enter a valid value.” They require meaningful data choices that support coverage. In this module, you apply equivalence partitioning and boundary value analysis as thinking tools to structure input domains. AI is used to suggest partitions and boundaries, which you then evaluate for relevance, assumptions, and completeness.
Ask AI to propose equivalence partitions, boundary values, and concrete data values for each input. Review and refine those suggestions based on observed system behavior.
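The partition-and-boundary review in this exercise can be made concrete with a small sketch. The quantity field and its 1–100 range are assumptions chosen for illustration, not the course application's actual inputs.

```python
# Sketch: equivalence partitions and boundary values for a hypothetical
# quantity field that accepts integers from 1 to 100.

def is_valid_quantity(value):
    """Reference behavior the tests probe: accept integers 1..100."""
    return isinstance(value, int) and 1 <= value <= 100

# One representative value per equivalence partition.
partitions = {
    "below range": (0, False),
    "in range": (50, True),
    "above range": (200, False),
}

# Boundary values sit at the edges of the valid partition.
boundaries = {
    "lower boundary": (1, True),
    "just below lower": (0, False),
    "upper boundary": (100, True),
    "just above upper": (101, False),
}

for name, (value, expected) in {**partitions, **boundaries}.items():
    actual = is_valid_quantity(value)
    assert actual == expected, f"{name}: {value} -> {actual}, expected {expected}"
```

When reviewing AI-suggested partitions against this kind of table, the question is whether each suggested value actually represents a distinct partition or boundary of the observed behavior, or merely looks plausible.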
Tests for single pages and functions are only a starting point. As soon as systems retain information or react differently based on prior actions, state becomes part of the behavior under test. In this module, you explore how even simple features introduce state, how state increases test scope, and how AI can help model system behavior without overwhelming you with unnecessary complexity.
Use AI to generate a state model for a feature that includes actions such as clearing inputs or resetting results. Review and simplify the model to reflect only meaningful states.
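A simplified state model like the one this exercise produces can be written down as a transition table and walked programmatically. The states and actions below (a search form with clear and reset) are illustrative assumptions, not the course application's real model.

```python
# Minimal state model sketch for a hypothetical search form with
# clear/reset actions. State and action names are illustrative.
TRANSITIONS = {
    ("empty",   "enter_input"): "filled",
    ("filled",  "enter_input"): "filled",
    ("filled",  "clear"):       "empty",
    ("filled",  "search"):      "results",
    ("results", "reset"):       "empty",
    ("results", "enter_input"): "filled",
}

def run(actions, start="empty"):
    """Walk a sequence of actions; fail fast on an undefined transition."""
    state = start
    for action in actions:
        key = (state, action)
        if key not in TRANSITIONS:
            raise ValueError(f"no transition for {action!r} in state {state!r}")
        state = TRANSITIONS[key]
    return state

# A test path derived from the model: fill, search, then reset.
assert run(["enter_input", "search", "reset"]) == "empty"
```

Keeping the model this small is the point of the exercise: every state in the table should correspond to observably different behavior, and anything AI adds beyond that is a candidate for simplification.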
What does it actually mean for a test case to be executable? This module focuses on the information another tester would need to run a test without interpretation: clear steps, concrete data, and observable outcomes. AI is used to suggest refinements, and you then evaluate whether those refinements achieve clarity or introduce new problems.
Have AI rewrite incomplete test cases into fully executable form. Review the results, correct inaccuracies, and optionally translate the tests into Gherkin.
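The before/after contrast from this exercise can be sketched as data. The login scenario, account name, and page text below are hypothetical, invented only to show the structural difference between a vague test and an executable one.

```python
# Sketch: the same test case before and after refinement into an
# executable form. Scenario, account, and UI text are hypothetical.

vague = {
    "title": "Check login",
    "steps": ["Enter a valid value", "Verify the result is correct"],
}

executable = {
    "title": "Valid credentials redirect to the dashboard",
    "preconditions": ["Account 'tester01' exists with password 'Pa55-demo'"],
    "steps": [
        "Open the login page",
        "Enter username 'tester01' and password 'Pa55-demo'",
        "Click 'Log in'",
    ],
    "data": {"username": "tester01", "password": "Pa55-demo"},
    "expected": "Dashboard page is shown with heading 'Welcome, tester01'",
}

# An executable test names concrete data and an observable outcome.
# A crude structural check makes the gap in the vague version visible.
def is_executable(case):
    return bool(case.get("data")) and bool(case.get("expected"))

assert not is_executable(vague)
assert is_executable(executable)
```

The structural check is deliberately shallow: it catches missing data and outcomes, while judging whether the steps are genuinely unambiguous remains a human review task.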
Automation promises faster execution and broader regression coverage—but not every test should be automated. In this module, you explore how AI can assist with test automation while learning to recognize where automation adds value and where it introduces risk.
Use AI to generate automated test scripts for selected scenarios. Review the scripts and examine execution output or sample evidence.
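As a taste of what a reviewed automated test can look like, here is a minimal sketch. The discount function stands in for the application under test, and the pricing rule is an assumption for illustration; the comments mark the kind of boundary thinking a reviewer adds to an AI-drafted script.

```python
# Sketch of a small automated check after human review.
# Hypothetical rule: 10% off order totals of 100.00 or more.

def apply_discount(total):
    return round(total * 0.9, 2) if total >= 100 else total

def test_discount_applied_at_threshold():
    # Boundary chosen deliberately; AI drafts often test only mid-range values.
    assert apply_discount(100.00) == 90.00

def test_no_discount_just_below_threshold():
    assert apply_discount(99.99) == 99.99

# Called directly to keep the sketch self-contained; in practice these
# would run under a test runner such as pytest.
test_discount_applied_at_threshold()
test_no_discount_just_below_threshold()
```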
Not all testing follows predefined steps. Exploratory testing relies on learning, observation, and investigation. In this module, you use AI to assist with planning and summarizing exploratory work while maintaining tester control over direction and interpretation.
Ask AI to help generate an exploratory testing charter. After exploration, use AI to assist with summarizing findings and drafting defect reports.
Bug reports are more than isolated failures—they represent patterns, risks, and communication challenges. In this module, you analyze existing bug reports with AI support, learning where automated classification helps and where it distorts meaning or priority.
Use AI to analyze a set of existing bug reports, identify themes, and produce summary insights. Review and correct the output to ensure accuracy.
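The theme-spotting part of this exercise can be sketched with simple counting. The bug reports below are fabricated examples; the caveat in the comments is the module's point, that frequency alone does not establish priority.

```python
# Sketch: grouping a set of bug reports by component and severity to
# surface patterns. The reports themselves are illustrative.
from collections import Counter

reports = [
    {"id": 1, "component": "search", "severity": "high"},
    {"id": 2, "component": "search", "severity": "low"},
    {"id": 3, "component": "cart",   "severity": "high"},
    {"id": 4, "component": "search", "severity": "medium"},
]

by_component = Counter(r["component"] for r in reports)
by_severity = Counter(r["severity"] for r in reports)

# The most frequently reported component is a candidate risk hotspot,
# but frequency alone does not establish impact or priority; that
# judgment stays with the tester.
hotspot, count = by_component.most_common(1)[0]
assert hotspot == "search" and count == 3
```

AI summaries of real bug sets do essentially this at larger scale, which is exactly why the review step matters: the counting is reliable, the interpretation is not.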
Individual tests do not add up to a strategy. In this module, you step back to plan testing deliberately, focusing on risk, priorities, and trade-offs. AI is used to suggest inputs and challenge assumptions, while responsibility for decisions remains with you.
Use AI to help populate a Test Strategy Canvas. Review and revise the strategy based on context, risk, and constraints.
Systems evolve. New stories extend existing behavior, introduce new states, and invalidate old assumptions. In this module, you practice using AI to adapt tests as systems change while maintaining control over scope, coverage, and intent.
Use AI to convert observed system changes into user stories, acceptance criteria, and updated test scenarios. Review and integrate changes into the existing test suite.
In the final module, everything comes together. You'll assess where AI can add the most value in your own practice, plan small experiments, and design an ethical framework for responsible adoption. The focus is on practical next steps: how to pilot, measure, and lead change as an AI-empowered tester.
Reflect on your own testing practice and identify areas where AI could realistically add value. Use AI to brainstorm small, low-risk pilot experiments and success measures. Define guardrails for ethical and responsible use, including data handling, bias awareness, and accountability. Produce a short personal or team roadmap outlining how you will integrate AI into your testing work and evaluate its impact over time.
Get practical AI training that keeps your requirements, designs, code, and tests in sync.
We offer private training sessions for teams. Contact us to discuss your needs.
Last updated January 26, 2026