3.2.3. Responsible AI Principles
💡 First Principle: Microsoft's Responsible AI framework defines how AI should be built and used, not as a set of rules imposed on customers, but as a shared framework for building trustworthy AI systems. The six principles guide both how Microsoft develops Copilot and how organizations should govern AI use in their environments.
The principles are:
| Principle | What It Means | Practical Example |
|---|---|---|
| Fairness | AI should treat all people equitably, avoiding bias | Copilot responses shouldn't differ based on user demographics |
| Reliability and Safety | AI should perform consistently and safely under varied conditions | Copilot should not generate harmful content or fail unpredictably |
| Privacy and Security | AI should protect user data and respect privacy | Copilot doesn't use one user's data to train responses for others |
| Inclusiveness | AI should benefit all people and avoid exclusion | Copilot should be accessible to users with disabilities |
| Transparency | AI should be understandable; users should know they're interacting with AI | Copilot identifies itself as an AI and cites its sources |
| Accountability | People should remain responsible for AI decisions | Admins and users are accountable for how they use Copilot |
💡 Key Point: For the exam, focus on recognizing each principle by name and matching it to a scenario. "A company wants to ensure that Copilot doesn't make hiring decisions without human review" maps to Accountability. "Users should know when a response was generated by AI" maps to Transparency.
⚠️ Exam Trap: Responsible AI principles are not a compliance regulation or legal requirement imposed by Microsoft. They're a framework that Microsoft uses internally and shares as guidance. Organizations are responsible for governing their own AI use in accordance with applicable laws and their own policies.
Reflection Question: Your organization implements a policy that all Copilot-generated content in customer communications must be reviewed by a human before sending. Which Responsible AI principle does this policy most directly support?