3 Days Online
A practical, hands-on course for experienced testers who want to integrate generative AI into real testing work—responsibly and effectively.
Rather than treating AI as a testing replacement or a standalone skill, the course positions AI as a collaborator that supports testing judgment, not a substitute for it. You will work with a small, realistic application and use AI throughout the course to explore how it can assist with understanding system behavior, generating and refining tests, structuring test ideas, exploring features, reporting defects, and making informed decisions about automation.
This course stands out because it teaches experienced testers how to use generative AI as part of real testing work, not as a shortcut or a novelty. Instead of focusing on AI theory, tool features, or certification checklists, it follows the natural flow of testing—from observing behavior and forming understanding, through test design, exploration, defect reporting, strategy, and automation—showing where AI helps, where it fails, and how testers stay in control.
Use generative AI to explore and understand system behavior when documentation is incomplete or unclear
Write prompts that describe observed behavior, constraints, and test intent clearly
Identify assumptions, gaps, and invented details in AI-generated test cases
Turn AI-generated test ideas into executable tests with clear steps and observable outcomes
Structure tests using partitions, boundaries, state, and sequences with AI support
Use AI to generate exploratory testing ideas without losing tester control or focus
Evaluate AI-generated nonfunctional test ideas for relevance and testability
Write clearer bug reports and evaluate AI-assisted summaries and metrics
Make informed decisions about what to automate and what not to automate
Create a practical plan for integrating generative AI into your own testing work
Begin by exploring what generative AI actually does and how it applies to software testing in practice. You will compare outputs from different AI tools, observe how confident-sounding responses can still be incomplete or incorrect, and examine how AI changes testing work—not by replacing tester judgment, but by accelerating certain activities. This module establishes AI as a collaborator whose output must be evaluated, challenged, and corrected.
Reflect on your role as a tester, including your core activities, the artifacts you create, the inputs you receive, and the different perspectives that best fit your work. Then explore how AI could support you in that role. This involves writing a clear prompt that describes your work, inputs, outputs, and perspective, then asking how AI might help. Compare responses from multiple chatbots and use follow-up questions to refine and deepen the results.
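A first prompt along these lines, adapted to your own context, usually gives a chatbot enough to respond usefully; the details below are illustrative, not prescribed wording:

    "I am an exploratory tester on a small web team. My inputs are user stories,
    UI mockups, and conversations with developers; my outputs are test charters,
    bug reports, and a weekly risk summary. Acting as a testing coach, suggest
    five ways a generative AI assistant could support this work, and note where
    its suggestions would still need my review."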
Jump directly into test development by asking AI to generate test cases with minimal context. Attempt to execute those tests and discover where AI output helps, where it breaks down, and where assumptions have been silently introduced. By comparing results from different AI tools, you will learn to recognize the difference between tests that appear reasonable and tests that are actually executable and grounded in observed behavior.
Starting with little more than an image of a UI page or a short general description, have AI generate a series of test cases for that page.
In the previous module, AI generated tests from little more than a snapshot of a UI. In this module, you work in the opposite direction, using tests and observed behavior to infer requirements and business rules. You will examine how AI fills gaps with assumptions, learn to separate confirmed behavior from speculation, and use requirement taxonomies to organize and correct AI-generated statements. The focus is on requirements as testable descriptions, not authoritative truths.
Ask your AI chatbots to enumerate the requirements implied by the test cases. Are any of the requirements wrong or unclear? Prompt the AI to correct them. Select one of the requirements classification taxonomies. Use AI to classify the requirements according to that taxonomy. Ask AI to construct a traceability matrix relating test cases to requirements.
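The matrix itself can be as simple as a two-column mapping from test cases to the requirements they exercise; the identifiers below are invented purely for illustration:

    TC-01  Calculate mileage with valid inputs    ->  REQ-03, REQ-07
    TC-02  Reject a negative distance             ->  REQ-05
    TC-03  Recalculate when the distance changes  ->  REQ-03, REQ-08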
Real test cases require more than placeholder phrases such as "enter a valid value." They require meaningful data choices that support coverage. In this module, you use equivalence partitioning and boundary value analysis as thinking tools to structure input domains and identify important conditions. AI is used to suggest partitions and boundaries, which you then evaluate for relevance, assumptions, and completeness before selecting representative data values.
Have AI review and update the mileage calculator tests using equivalence partitioning and boundary value analysis. Define equivalence partitions for each input and identify boundary values for each partition. Have AI generate real data values for the inputs and identify any necessary pre-existing data.
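As a sketch of the structure you are aiming for, assuming purely for illustration a distance field that accepts whole miles from 1 to 9999, the partitions and boundary values might be recorded like this:

    # Illustrative only: the real mileage calculator's input rules are discovered in class.
    distance_partitions = {
        "valid whole miles (1-9999)": [1, 500, 9999],   # representative values
        "below minimum (<= 0)": [0, -5],                # invalid partition
        "above maximum (> 9999)": [10000],              # invalid partition
        "non-numeric input": ["", "ten", "12.5.3"],     # invalid partition
    }

    # Boundary values cluster at the edges of the valid range.
    distance_boundaries = [0, 1, 2, 9998, 9999, 10000]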
What does it actually mean for a test case to be executable? This module focuses on the information another tester would need to run a test without interpretation. Use AI to suggest refinements that add clear steps, concrete data, and observable outcomes, then evaluate whether those refinements achieve clarity or introduce new problems.
Take a set of incomplete test cases and use AI to evaluate whether they are clear and consistent enough to be run without interpretation. Ask AI to improve the tests by adding details and clear instructions. Then have AI translate these tests into Gherkin Given-When-Then notation.
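Translated into Gherkin, one such test might read as follows; the field name, limit, and message text are illustrative rather than taken from the course application:

    Feature: Mileage calculation
      Scenario: Reject a distance below the minimum
        Given the mileage calculator page is open
        When the tester enters a distance of 0 miles
        And submits the form
        Then the message "Distance must be at least 1 mile" is displayed
        And no mileage total is calculated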
User stories describe changes to a system, and acceptance criteria define the conditions under which those changes are considered complete. In this module, you examine how acceptance criteria, scenarios, and test cases relate in practice. You will explore the many-to-many relationship between them and use AI to draft and refine acceptance criteria while ensuring that scenarios remain grounded in realistic system behavior.
Given a set of user stories, collaborate with AI to define the acceptance criteria for these stories. Then have AI write new or enhanced scenarios and test cases for these stories. Try testing each story independently and in combination. Finally, have AI create a mapping table between stories, acceptance criteria, and scenarios or test cases.
Although tests for single pages and functions are a good start and can catch a lot of bugs, you will eventually need to test larger units of functionality. These can be abstracted as states and visualized graphically. Learn state-based thinking as a way to model sequences, transitions, and cumulative effects. Use AI to help sketch state diagrams and identify valid transitions, then reason about functional coverage in terms of states, transitions, and invariants rather than test counts.
Use your chatbots to create a state model illustrating the different states of the sample application and the behaviors that cause transitions from page to page. Use AI to determine whether the test cases from the last exercise fully cover the paths through that state model. If not, what tests need to be added? As a bonus, have AI generate a UI navigation map for an extended version of the application.
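One lightweight way to capture such a model is a transition table that test paths can be checked against; the page names below are hypothetical rather than the sample application's actual pages:

    # Hypothetical page-to-page transitions for a small web application.
    transitions = {
        "Login": {"log in": "Dashboard"},
        "Dashboard": {"open calculator": "Calculator", "log out": "Login"},
        "Calculator": {"submit": "Results", "cancel": "Dashboard"},
        "Results": {"back": "Dashboard"},
    }

    def uncovered_transitions(test_paths):
        """Return modeled transitions that no test path exercises."""
        modeled = {(page, target) for page, actions in transitions.items()
                   for target in actions.values()}
        exercised = {(a, b) for path in test_paths for a, b in zip(path, path[1:])}
        return modeled - exercised

    # A single happy-path test leaves the log-out, cancel, and back transitions untested.
    print(uncovered_transitions([["Login", "Dashboard", "Calculator", "Results"]]))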
Functional correctness is not enough. Systems must also meet quality expectations such as usability, accessibility, performance, and security. Use AI to evaluate nonfunctional requirements and to convert them from vague aspirations into concrete, testable objectives. Design tests that ensure systems meet the quality criteria essential for user satisfaction and system reliability.
Use AI to improve loosely defined nonfunctional requirements, make them testable, and identify techniques for testing these requirements.
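For example, a vague statement such as "the calculator should feel fast" might become "95% of mileage calculations complete within 2 seconds at 100 concurrent users"; the numbers here are purely illustrative, but the rewritten version names a measure, a threshold, and a load condition, which is what makes it testable.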
Individual tests do not add up to a test strategy. Learn how AI can help you make deliberate testing choices aligned with business goals and risk. Use a Test Strategy Canvas to outline and plan a comprehensive testing strategy. See how AI can draft inputs and challenge assumptions, while responsibility for prioritization and trade-offs remains with the tester.
Use AI to create a Test Strategy Canvas—a single concise view of the overall approach to testing a full application.
Bug reports are both technical findings and communication artifacts. Practice using AI to analyze bug reports: distinguishing real failures from expected behavior or misunderstandings, assessing impact and risk, and communicating findings effectively to different audiences. While AI assists with summarization and categorization, learn to evaluate where that help improves clarity and where it distorts meaning or priority.
Ask AI chatbots to come up with a list of potential bugs, different kinds of risks, and a set of sample bug reports. Then take a list of actual bug reports and have the AI analyze and classify these into an actionable report.
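Whether drafted by hand or with AI assistance, a useful report keeps the same basic shape; the example below is invented for illustration:

    Title:    Mileage total not recalculated after the distance is corrected
    Steps:    Enter 100 miles and submit; change the distance to 200 miles and submit again.
    Expected: The total updates to reflect 200 miles.
    Actual:   The total still shows the value for 100 miles.
    Impact:   Users may be over- or under-reimbursed; easy to trigger, medium severity.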
Automating test execution promises to make it easier and cheaper to run large test suites more frequently, a capability essential to modern agile development and DevOps. Explore the different kinds of automated tests. See how AI can generate automated tests for different levels and architectures. Use AI to evaluate a system's suitability for automation and to review and improve existing automation suites.
While no programming or automation tool experience is necessary, get a sense of the power of automated testing by observing demonstrations of different kinds of automation. Ask AI chatbots for advice about automating different test scenarios.
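For orientation only, since the course requires no coding, a unit-level automated check in pytest, a common Python test framework, might look like the sketch below; the fare function is a stand-in invented for the example:

    # Illustrative pytest example; calculate_fare stands in for real application logic.
    import pytest

    def calculate_fare(miles, rate_per_mile=0.50):
        if miles < 0:
            raise ValueError("miles must be non-negative")
        return round(miles * rate_per_mile, 2)

    def test_standard_fare():
        assert calculate_fare(100) == 50.00

    def test_negative_miles_are_rejected():
        with pytest.raises(ValueError):
            calculate_fare(-1)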
So what does all of this mean? Now it all comes together. You'll assess where AI can add the most value in your own practice, plan small experiments, and design an ethical framework for responsible adoption. The focus is on practical next steps—how to pilot, measure, and lead change as an AI-empowered tester.
Reflect on your own testing practice and identify areas where AI could realistically add value. Use AI to brainstorm small, low-risk pilot experiments and success measures. Define guardrails for ethical and responsible use, including data handling, bias awareness, and accountability. Produce a short personal or team roadmap outlining how you will integrate AI into your testing work and evaluate its impact over time.
Get practical AI training that keeps your requirements, designs, code, and tests in sync.
We offer private training sessions for teams. Contact us to discuss your needs.