1 Day Online
A practical foundation in generative AI for business analysts and software testers, focusing on how AI actually behaves and how it changes professional work.
Rather than approaching AI as a tool to be mastered through templates, techniques, or prompt engineering, the course centers on judgment, evaluation, and shared understanding. Participants work directly with generative AI from the start, using familiar analysis and testing tasks to reset common assumptions. The course explores how AI builds and revises a model of the problem, and how common artifacts—such as requirements, scenarios, and tests—act as different views into that shared understanding.
This course begins with experience, not theory. Rather than teaching fixed prompts or patterns, it emphasizes ongoing interaction and prioritizes judgment over memorization. It offers a grounded alternative to tool-driven or hype-driven approaches by clarifying how generative AI changes the nature of professional responsibility.
Explain what generative AI is and what it is not
Describe how different assumptions about AI behavior lead to different outcomes
Recognize how generative AI builds and revises an internal model of a problem
Observe how different artifacts reflect a shared underlying understanding
Work with AI through iterative dialogue over multiple turns
Refine, correct, and redirect AI output as understanding evolves
Identify gaps, assumptions, inconsistencies, and confidently wrong output
Use disagreement and error as signals to test understanding
Compare outputs from different AI models to surface blind spots
Reconcile differing AI perspectives into a more coherent understanding
Decide when AI output is useful input for further reasoning
Decide when professional judgment calls for setting AI output aside
Enter AI for Business Analysis or AI for Software Testing courses with a shared foundation
Share a common vocabulary for discussing AI behavior and professional responsibility
No upcoming dates. Inquire for details.
This module resets common assumptions about generative AI by examining what it is not. Participants work directly with AI to experience why treating it as a search engine, programming language, or database leads to confusion, and why a different mental model is required. The module establishes that confidence and fluency do not imply correctness, and that prompts are not simply queries or commands.
Interact with generative AI using familiar questions and tasks, observe where expectations break down, and reflect on what those breakdowns reveal about the nature of generative AI.
This module introduces the idea that generative AI builds a model of the problem and produces different artifacts as views into that shared understanding. Participants explore how requirements, scenarios, and tests relate to one another and what happens when understanding changes. The focus shifts from editing documents to refining shared understanding.
Generate multiple artifacts from the same AI conversation, introduce a targeted correction, and observe how changes propagate across those artifacts.
This module examines what happens when AI produces plausible but incorrect or inconsistent output. By comparing results from different AI models, participants learn to treat disagreement and hallucinations as diagnostic signals. The emphasis is on using inconsistency to surface assumptions and test understanding rather than simply chasing correctness.
Ask multiple AI systems to address the same problem, compare their outputs, and use points of disagreement to improve the shared understanding of the problem.
The final module brings the previous ideas together to examine how generative AI changes the nature of professional responsibility. As AI takes over routine transformation work, gaps in understanding surface faster and misalignment becomes harder to ignore. Participants clarify where human accountability remains essential and reframe professional competence around shared understanding.
Determine how AI fits into your own role, using the insights from earlier modules to recognize where judgment is required, where responsibility remains human, and how your understanding of professional work has evolved.
Get practical AI training that keeps your requirements, designs, code, and tests in sync.
We offer private training sessions for teams. Contact us to discuss your needs.