2.5. Reflection Checkpoint
Key Takeaways
Before proceeding, ensure you can:
- Distinguish between generative AI and traditional AI use cases
- Explain when to use pretrained versus fine-tuned models
- Describe how tokens drive generative AI costs and calculate basic ROI
- Explain when reasoning models add value over standard models
- Identify fabrication, reliability, and bias risks and their mitigations
- Explain how data type, quality, and representativeness affect AI outcomes
- Describe how prompt engineering affects AI output quality
- Explain how RAG/grounding enables organization-specific AI responses
- Articulate key secure AI principles, including authentication requirements
- Describe the ML lifecycle and explain why models need ongoing monitoring
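The token-cost and ROI point above can be made concrete with a small back-of-the-envelope calculation. The sketch below is illustrative only: the per-token prices, request volumes, and savings figures are assumptions for the example, not real Azure OpenAI or Copilot rates.

```python
# Sketch of a token-cost and ROI estimate. All prices and usage
# figures are illustrative assumptions, not actual vendor pricing.

PRICE_PER_1K_INPUT = 0.0025   # assumed $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.01    # assumed $ per 1,000 output tokens

def monthly_token_cost(requests_per_month, input_tokens, output_tokens):
    """Estimated monthly spend for a given request volume."""
    per_request = (input_tokens / 1000 * PRICE_PER_1K_INPUT
                   + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)
    return requests_per_month * per_request

def simple_roi(monthly_value, monthly_cost):
    """Basic ROI: (value delivered - cost) / cost."""
    return (monthly_value - monthly_cost) / monthly_cost

# Example: 50,000 requests/month, ~1,500 input and ~500 output tokens each.
cost = monthly_token_cost(50_000, 1_500, 500)

# Suppose the deployment saves staff time valued at $2,000/month.
roi = simple_roi(2_000, cost)
print(f"monthly cost: ${cost:,.2f}, ROI: {roi:.1%}")
```

The key idea: generative AI costs scale with tokens processed (both input and output), so an ROI estimate needs usage volume and token counts per request, not just license counts.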
Connecting Forward
In Phase 3, you'll learn the specific Microsoft products and services that implement these concepts. Understanding what generative AI is and how it creates value prepares you to recommend the right Microsoft tools for specific scenarios.
Self-Check Questions
- A financial services company wants to automatically categorize incoming support tickets as "account issue," "billing question," or "general inquiry." Would you recommend generative AI or traditional AI for this task? Why?
- An employee says Copilot's responses about company policies are sometimes wrong. They suggest fine-tuning the model on company documents. Is this the right approach? What would you recommend instead?
- A company wants to measure the ROI of their Copilot deployment. They propose tracking "number of Copilot licenses deployed." Why is this metric insufficient, and what would you suggest instead?