3 Days Online

AI for Software Testing

A practical, hands-on course for experienced testers who want to integrate generative AI into real testing work—responsibly and effectively.

Rather than treating AI as a testing replacement or a standalone skill, this course positions generative AI as a collaborator that supports tester judgment. You work with a small but progressively evolving application and use AI throughout the course to explore how it can assist with generating test ideas, inferring requirements, refining test design, modeling system behavior, supporting exploratory testing, analyzing defects, planning strategy, and adapting tests as systems change. The course emphasizes evaluation over generation: understanding where AI output is useful, where it introduces assumptions, and where human testing judgment remains essential.

Why This Course

This course teaches experienced testers how to use generative AI as part of real testing work, not as a shortcut or novelty. Instead of focusing on AI theory, prompt tricks, or certification checklists, it follows the natural flow of testing work—from initial test ideas, through requirement inference, test design, execution, exploration, defect analysis, strategy, and adaptation to change—showing where AI helps, where it fails, and where tester judgment remains essential.

Course Details

  • Duration
    3 Days online
  • Format
    Hands-on exercises using a progressively evolving application, with guided use of generative AI to support real testing activities such as test design, analysis, modeling, execution, exploration, and planning.
  • Audience
    Software testers, Test analysts, QA engineers, Test leads

What You'll Be Able to Do

1. Evaluate generative AI output critically rather than accepting it at face value
2. Identify assumptions, gaps, and invented details in AI-generated tests and requirements
3. Infer requirements and business rules from observed system behavior
4. Define testing purpose and scope based on role, risk, and intent
5. Design meaningful test data using equivalence partitioning and boundary analysis
6. Model system behavior using states and transitions
7. Turn test ideas into executable manual and automated tests
8. Use AI to support exploratory testing while maintaining tester control
9. Analyze bug reports to identify patterns, impact, and risk
10. Create and justify a test strategy aligned with product and business goals
11. Adapt existing test suites as systems evolve

Upcoming Sessions

  • Mar 23 – Mar 25, 2026 · Live Online · Open
  • May 18 – May 20, 2026 · Live Online · Open
  • Jun 8 – Jun 10, 2026 · Live Online · Open
  • Jul 20 – Jul 22, 2026 · Live Online · Open
  • Sep 21 – Sep 23, 2026 · Live Online · Open
  • Nov 18 – Nov 20, 2026 · Live Online · Open

Course Outline

1. Let's Test!

Jump directly into testing by asking AI to generate test cases with minimal context. Attempt to execute those tests and discover where AI output helps, where it breaks down, and where assumptions have been silently introduced. By comparing results from different AI tools, you learn to recognize the difference between tests that appear reasonable and tests that are actually executable and grounded in observed behavior.

Objectives

  • Generate test ideas from limited system information using AI
  • Analyze AI-generated tests for missing details and invented behavior
  • Distinguish plausible test wording from executable test instructions
  • Compare outputs across multiple AI tools
  • Evaluate where tester judgment is required despite confident AI output

Exercise

Use AI to generate test cases from a short description or image of a UI page. Compare outputs from multiple chatbots and annotate where assumptions, ambiguities, or missing details appear.

2. Tests as Specifications

AI was able to generate tests from little more than a snapshot of a UI. In this module, you work in the opposite direction: using tests and observed behavior to infer requirements and business rules. You examine how AI fills gaps with assumptions, learn to separate confirmed behavior from speculation, and use requirement taxonomies to organize and correct AI-generated statements. The focus is on requirements as testable descriptions, not authoritative truths.

Objectives

  • Infer requirements and business rules from test cases
  • Distinguish confirmed behavior from assumptions and speculation
  • Identify inaccuracies in AI-generated requirement statements
  • Organize inferred requirements using a classification approach
  • Evaluate the reliability of AI-assisted traceability

Exercise

Use AI to extract implied requirements and business rules from an existing test set. Review, correct, and reorganize those statements using a requirements classification taxonomy.

3. Testing with Purpose

Once AI can generate large volumes of tests, the challenge is no longer quantity—it is relevance. In this module, you step back to define your role as a tester and clarify the purpose of your testing. You examine how different testing perspectives lead to different test choices, and how AI must be guided by intent in order to be useful rather than overwhelming.

Objectives

  • Articulate your testing role and responsibilities
  • Define a clear testing purpose for a system or feature
  • Evaluate AI-generated tests against that purpose
  • Reduce test scope based on role, risk, and intent
  • Justify inclusion and exclusion decisions

Exercise

Use AI to propose testing focus areas based on different tester roles. Refine prompts to reflect your own role and use AI output to help narrow an overly broad test set.

4. Choosing Test Data

Real test cases require more than placeholder phrases such as “enter a valid value.” They require meaningful data choices that support coverage. In this module, you apply equivalence partitioning and boundary value analysis as thinking tools to structure input domains. AI is used to suggest partitions and boundaries, which you then evaluate for relevance, assumptions, and completeness.

Objectives

  • Apply equivalence partitioning to input domains
  • Apply boundary value analysis to identify edge conditions
  • Evaluate AI-suggested partitions for correctness and relevance
  • Select representative test data aligned with testing purpose
  • Distinguish observed behavior from inferred constraints

Exercise

Ask AI to propose equivalence partitions, boundary values, and concrete data values for each input. Review and refine those suggestions based on observed system behavior.
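As a concrete illustration of the two techniques (the age field and its 18–65 rule are invented for this example, not taken from the course application), here is a minimal Python sketch that derives partitions, boundary values, and a compact test data set:

```python
# Equivalence partitioning and boundary value analysis for a
# hypothetical "age" input that is valid from 18 to 65 inclusive.

MIN_AGE, MAX_AGE = 18, 65

# Three equivalence partitions: below range, in range, above range.
partitions = {
    "too_low":  range(0, MIN_AGE),            # invalid: 0..17
    "valid":    range(MIN_AGE, MAX_AGE + 1),  # valid: 18..65
    "too_high": range(MAX_AGE + 1, 150),      # invalid: 66..149
}

def is_valid_age(age: int) -> bool:
    """Reference validation rule the test data is derived from."""
    return MIN_AGE <= age <= MAX_AGE

# Boundary value analysis: test just below, on, and just above each edge.
boundary_values = [MIN_AGE - 1, MIN_AGE, MIN_AGE + 1,
                   MAX_AGE - 1, MAX_AGE, MAX_AGE + 1]

# One representative value from the middle of each partition plus all
# boundary values gives small but meaningful input-domain coverage.
representatives = [r[len(r) // 2] for r in partitions.values()]
test_data = sorted(set(representatives + boundary_values))
```

The point of the exercise is exactly the step this sketch skips: checking whether the partitions and boundaries an AI proposes actually match observed system behavior rather than assumed constraints.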

5. State Matters

Tests for single pages and functions are only a starting point. As soon as systems retain information or react differently based on prior actions, state becomes part of the behavior under test. In this module, you explore how even simple features introduce state, how state increases test scope, and how AI can help model system behavior without overwhelming you with unnecessary complexity.

Objectives

  • Identify system states that affect observable behavior
  • Describe transitions triggered by user actions
  • Construct simple state models with AI assistance
  • Evaluate AI-generated models for unnecessary complexity
  • Explain how state expands testing beyond individual pages

Exercise

Use AI to generate a state model for a feature that includes actions such as clearing inputs or resetting results. Review and simplify the model to reflect only meaningful states.
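To make the idea of a state model concrete, here is a small Python sketch; the feature (a search form with clear and reset actions) and its states are hypothetical, chosen only to show the shape of such a model:

```python
# A minimal state-transition model for a hypothetical search form.
# Each (state, action) pair maps to the resulting state.

transitions = {
    ("empty", "type_query"): "filled",
    ("filled", "type_query"): "filled",
    ("filled", "clear"): "empty",
    ("filled", "search"): "results_shown",
    ("results_shown", "reset"): "empty",
    ("results_shown", "type_query"): "filled",
}

def run(start: str, actions: list[str]) -> str:
    """Walk the model; fail loudly if an action is undefined in a state."""
    state = start
    for action in actions:
        key = (state, action)
        if key not in transitions:
            raise ValueError(f"action {action!r} not defined in state {state!r}")
        state = transitions[key]
    return state

# A test path is just a walk through the model.
assert run("empty", ["type_query", "search", "reset"]) == "empty"
```

Reviewing an AI-generated model then becomes a concrete question: which of its states and transitions correspond to observably different behavior, and which are unnecessary complexity to prune.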

6. Making Tests Executable

What does it actually mean for a test case to be executable? This module focuses on the information required for another tester to run a test without interpretation. You use AI to suggest refinements that add clear steps, concrete data, and observable outcomes, then evaluate whether those refinements achieve clarity or introduce new problems.

Objectives

  • Identify missing information that prevents test execution
  • Refine tests to include concrete steps, data, and outcomes
  • Evaluate AI-generated details for clarity and correctness
  • Translate test intent into structured formats such as Given-When-Then
  • Verify internal consistency within test cases

Exercise

Have AI rewrite incomplete test cases into fully executable form. Review the results, correct inaccuracies, and optionally translate the tests into Gherkin.
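As an illustration of the Given-When-Then structure (using a hypothetical login feature with a stand-in implementation, not the course application), the same intent can be written as an executable Python test:

```python
# A vague test reads: "log in with a valid user and check it works".
# The executable version below pins down steps, data, and outcome.

def login(username: str, password: str) -> dict:
    """Stand-in for the system under test (hypothetical behavior)."""
    if username == "alice" and password == "s3cret!":
        return {"status": "ok", "user": username}
    return {"status": "error", "message": "invalid credentials"}

def test_login_with_valid_credentials():
    # Given: a registered user with known credentials (concrete data)
    username, password = "alice", "s3cret!"
    # When: the user logs in (concrete, repeatable step)
    result = login(username, password)
    # Then: the login succeeds for that user (observable outcome)
    assert result["status"] == "ok"
    assert result["user"] == "alice"

test_login_with_valid_credentials()
```

The Given/When/Then comments map one-to-one onto Gherkin keywords, which is why the exercise's optional translation step is mostly mechanical once the test is genuinely executable.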

7. Automating Tests

Automation promises faster execution and broader regression coverage—but not every test should be automated. In this module, you explore how AI can assist with test automation while learning to recognize where automation adds value and where it introduces risk.

Objectives

  • Distinguish tests suitable for automation from manual tests
  • Evaluate AI-generated automation scripts
  • Analyze automation execution results and evidence
  • Identify common automation risks and anti-patterns
  • Justify automation decisions based on value and cost

Exercise

Use AI to generate automated test scripts for selected scenarios. Review the scripts and examine execution output or sample evidence.
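One anti-pattern the review step often catches in AI-generated scripts is synchronization by fixed sleeps. A hedged sketch of the usual fix, a polling wait with a timeout (the helper name and the simulated job are illustrative, not from any specific framework):

```python
import time

def wait_until(condition, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll `condition` until it returns True or `timeout` elapses.
    Replaces fixed `time.sleep(...)` calls, a common cause of flaky
    or needlessly slow automated tests."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# Example: wait for a (simulated) background job to finish.
job = {"done": False}

def finish_job():
    job["done"] = True

finish_job()
assert wait_until(lambda: job["done"]) is True
```

A test built on explicit conditions also documents what it is waiting for, which makes its failures far easier to diagnose than a timing-dependent sleep.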

8. Exploratory Testing

Not all testing follows predefined steps. Exploratory testing relies on learning, observation, and investigation. In this module, you use AI to assist with planning and summarizing exploratory work while maintaining tester control over direction and interpretation.

Objectives

  • Construct exploratory testing charters
  • Use AI to generate investigation ideas
  • Observe unexpected behavior during exploration
  • Evaluate AI-generated summaries of findings
  • Produce clear defect reports

Exercise

Ask AI to help generate an exploratory testing charter. After exploration, use AI to assist with summarizing findings and drafting defect reports.

9. Making Sense of Bugs

Bug reports are more than isolated failures—they represent patterns, risks, and communication challenges. In this module, you analyze existing bug reports with AI support, learning where automated classification helps and where it distorts meaning or priority.

Objectives

  • Analyze bug reports for patterns and trends
  • Classify defects by impact and risk
  • Evaluate AI-generated bug summaries
  • Synthesize defects into actionable insights
  • Communicate findings clearly

Exercise

Use AI to analyze a set of existing bug reports, identify themes, and produce summary insights. Review and correct the output to ensure accuracy.
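As an illustration of the kind of pattern analysis this module covers (the bug records below are invented for the example; real input would come from your tracker), a few lines of Python can surface clustering by component and severity:

```python
from collections import Counter

# Invented sample bug reports, each reduced to the fields we analyze.
bugs = [
    {"id": 1, "component": "checkout", "severity": "high"},
    {"id": 2, "component": "checkout", "severity": "medium"},
    {"id": 3, "component": "search",   "severity": "low"},
    {"id": 4, "component": "checkout", "severity": "high"},
    {"id": 5, "component": "login",    "severity": "medium"},
]

# Count occurrences to reveal where defects cluster.
by_component = Counter(b["component"] for b in bugs)
by_severity = Counter(b["severity"] for b in bugs)

# The most-reported component is a candidate risk hotspot, though the
# module's point stands: counts alone can distort priority, so a human
# still judges impact before acting on the numbers.
hotspot, count = by_component.most_common(1)[0]
print(f"hotspot: {hotspot} ({count} of {len(bugs)} bugs)")
```

The same tallies are a useful cross-check on AI-generated summaries: if the summary's claimed themes do not match the counts, something has been distorted.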

10. Test Strategy and Planning

Individual tests do not add up to a strategy. In this module, you step back to plan testing deliberately, focusing on risk, priorities, and trade-offs. AI is used to suggest inputs and challenge assumptions, while responsibility for decisions remains with you.

Objectives

  • Identify key risks driving testing priorities
  • Construct a Test Strategy Canvas
  • Evaluate AI-generated strategy inputs
  • Prioritize testing activities intentionally
  • Justify strategic trade-offs

Exercise

Use AI to help populate a Test Strategy Canvas. Review and revise the strategy based on context, risk, and constraints.

11. Change Happens

Systems evolve. New stories extend existing behavior, introduce new states, and invalidate old assumptions. In this module, you practice using AI to adapt tests as systems change while maintaining control over scope, coverage, and intent.

Objectives

  • Identify behavioral changes introduced by new features
  • Translate changes into user stories and acceptance criteria
  • Generate scenarios that extend existing test coverage
  • Evaluate coverage impact caused by change
  • Adapt tests without uncontrolled growth

Exercise

Use AI to convert observed system changes into user stories, acceptance criteria, and updated test scenarios. Review and integrate changes into the existing test suite.

12. Integrating AI Into Your Testing

Now it all comes together. You'll assess where AI can add the most value in your own practice, plan small experiments, and design an ethical framework for responsible adoption. The focus is on practical next steps: how to pilot, measure, and lead change as an AI-empowered tester.

Objectives

  • Identify high-potential areas to pilot AI within your testing processes
  • Plan change-management and measurement strategies for AI adoption
  • Develop an ethical framework for responsible use of AI tools and data
  • Create a personal or team roadmap for integrating AI as a productivity and quality multiplier

Exercise

Reflect on your own testing practice and identify areas where AI could realistically add value. Use AI to brainstorm small, low-risk pilot experiments and success measures. Define guardrails for ethical and responsible use, including data handling, bias awareness, and accountability. Produce a short personal or team roadmap outlining how you will integrate AI into your testing work and evaluate its impact over time.

Ready to learn?

Get practical AI training that keeps your requirements, designs, code, and tests in sync.

We offer private training sessions for teams. Contact us to discuss your needs.

Contact Us

Last updated January 26, 2026