5.1.10. Explain basic concepts related to artificial intelligence (AI). (Obj. 4.10)
💡 First Principle: AI is a tool that augments human capabilities; it is not magic, and understanding its limitations is as important as understanding its capabilities.
Artificial Intelligence is now integrated into everyday technology. As a technician, you need to understand what AI is, how it's used, and critically, where it fails. The exam tests conceptual understanding, not technical implementation.
Core AI Concepts:
| Term | Definition | Example |
|---|---|---|
| Artificial Intelligence | Systems that perform tasks requiring human-like intelligence | Voice assistants, autonomous vehicles |
| Machine Learning (ML) | AI that learns from data rather than explicit programming | Spam filters that improve over time |
| Training Data | The dataset used to teach an ML model | Millions of emails labeled "spam" or "not spam" |
| Model | The trained AI system that makes predictions | The spam filter after training |
| Inference | Using a trained model to make predictions on new data | Filtering a new incoming email |
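The train → model → inference cycle in the table can be sketched in a few lines of Python. This is a toy illustration only: the "model" is just a pair of keyword-count tables rather than a real ML algorithm, and the labeled emails are invented sample data.

```python
# Toy sketch of training vs. inference (hypothetical data, not a real ML algorithm).

def train(labeled_emails):
    """Training: count how often each word appears in spam vs. non-spam emails."""
    spam_counts, ham_counts = {}, {}
    for text, is_spam in labeled_emails:
        target = spam_counts if is_spam else ham_counts
        for word in text.lower().split():
            target[word] = target.get(word, 0) + 1
    return spam_counts, ham_counts  # this pair of tables is the trained "model"

def infer(model, text):
    """Inference: score a new, unseen email against the trained model."""
    spam_counts, ham_counts = model
    score = sum(spam_counts.get(w, 0) - ham_counts.get(w, 0)
                for w in text.lower().split())
    return score > 0  # True = predicted spam

training_data = [
    ("win a free prize now", True),
    ("claim your free reward", True),
    ("meeting notes attached", False),
    ("project status update", False),
]
model = train(training_data)
print(infer(model, "free prize inside"))             # -> True (looks like spam)
print(infer(model, "status update on the project"))  # -> False
```

Note how training happens once on labeled data, while inference runs repeatedly on new input; real spam filters follow the same split, just with far more sophisticated models.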
Types of AI You'll Encounter:
- Predictive AI: Forecasts outcomes based on patterns (spam detection, predictive text, recommendation engines)
- Generative AI: Creates new content - text, images, code (ChatGPT, DALL-E, GitHub Copilot)
- Classification AI: Categorizes data into groups (malware detection, image recognition)
- Natural Language Processing (NLP): Understands and generates human language (chatbots, voice assistants)
AI Integration in IT Systems:
| Application | AI Feature | How It Works |
|---|---|---|
| Spam filtering | Adaptive filtering | Learns from user actions (marking spam) to improve accuracy |
| Antivirus | Behavioral detection | Identifies malware by suspicious behavior patterns, not just signatures |
| Help Desk | Chatbots | Handles common queries, escalates complex issues to humans |
| Monitoring | Anomaly detection | Alerts when system behavior deviates from learned "normal" patterns |
| Search | Relevance ranking | Predicts which results are most useful based on past clicks |
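The anomaly-detection row can be made concrete with a short Python sketch: learn a "normal" baseline from past readings, then alert on values that deviate too far from it. The CPU figures and the three-standard-deviation threshold are illustrative assumptions, not any monitoring product's actual logic.

```python
# Toy anomaly detector: baseline = mean/std-dev of "normal" readings (illustrative).
import statistics

def learn_baseline(readings):
    """Learning phase: summarize normal behavior as mean and standard deviation."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomaly(baseline, value, threshold=3.0):
    """Alert when a reading sits more than `threshold` std devs from the mean."""
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

normal_cpu = [22, 25, 19, 23, 21, 24, 20, 22]  # percent utilization (sample data)
baseline = learn_baseline(normal_cpu)
print(is_anomaly(baseline, 23))  # -> False (within normal range)
print(is_anomaly(baseline, 95))  # -> True  (far outside the learned baseline)
```

Real monitoring tools use richer models than a single mean and standard deviation, but the principle is the same: deviation from learned "normal" triggers the alert.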
Critical Limitations - The "HABIT" Framework:
- H - Hallucinations: Generative AI can confidently produce false information. It doesn't "know" facts - it predicts likely word sequences. Always verify AI outputs against authoritative sources.
- A - Accuracy depends on training: An AI is only as good as its training data. Garbage in = garbage out. An AI trained on outdated data gives outdated answers.
- B - Bias inheritance: AI learns from human-created data, including our biases. A hiring AI trained on biased historical data will perpetuate that bias.
- I - Interpretation required: AI outputs need human judgment. A medical AI might flag an anomaly, but a doctor must interpret it in clinical context.
- T - Trust boundaries: Know what you can and can't trust AI for. Good for drafting and brainstorming; bad for final decisions on critical matters.
Data Privacy - The Non-Negotiable Rule:
⚠️ NEVER enter sensitive data into public AI systems. This includes:
- Customer PII (names, SSNs, account numbers)
- Company confidential information (financials, trade secrets)
- Passwords, API keys, or credentials
- Patient health information (HIPAA)
- Internal communications
Why? Public AI models may:
- Use your input for future training
- Store inputs in logs that could be breached
- Have no contractual obligation to protect your data
Safe AI Use in IT Support:
✅ Appropriate uses:
- "Help me write a PowerShell script to list all disabled user accounts"
- "Explain the difference between NTFS and FAT32"
- "Suggest troubleshooting steps for a printer that won't connect"
- "Summarize this generic error message"
❌ Inappropriate uses:
- "Here's our client database - analyze it"
- "Check if this employee's email password 'Summer2024!' is strong"
- "Here's the support ticket with customer SSN - what's wrong?"
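A safer alternative to pasting a raw ticket is to strip the obvious PII first and ask the AI only about the generic problem. The sketch below is a hypothetical helper with a few illustrative regex patterns; it is nowhere near a complete PII scrubber, and real redaction needs policy approval and human review before anything leaves your environment.

```python
# Hypothetical PII redaction sketch -- patterns are illustrative, not exhaustive.
import re

def redact(text):
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)      # US SSN format
    text = re.sub(r"\b\d{8,12}\b", "[ACCOUNT]", text)           # long digit runs
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
    return text

ticket = "Customer jane@example.com (SSN 123-45-6789, acct 4455667788) cannot print."
print(redact(ticket))
# -> Customer [EMAIL] (SSN [SSN], acct [ACCOUNT]) cannot print.
```

Even with redaction, the safest habit is to describe the problem in generic terms ("a customer cannot print over the network") rather than sharing ticket text at all.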
Technician's Perspective:
Think of AI as a very knowledgeable but sometimes unreliable junior colleague. You would:
- Review their work before submitting it
- Not share confidential information with them unnecessarily
- Use their suggestions as starting points, not final answers
- Know when to override their recommendations with your expertise
Scenario: You use an AI to help write a script that clears temp files. The AI suggests a command that recursively deletes files. Before running it, you should:
- Read the command carefully - does it target the right directory?
- Test on a non-production system first
- Add safeguards (confirmation prompts, logging)
- Never run code you don't understand on production systems
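The safeguards in that checklist can be sketched in Python. The directory-name check, the dry-run default, and the log file are illustrative assumptions about how such a script might be hardened; review and test anything like this on a non-production system first.

```python
# Illustrative temp-file cleanup with the safeguards described above
# (dry run by default, logging, sanity check on the target directory).
import logging
from pathlib import Path

logging.basicConfig(filename="cleanup.log", level=logging.INFO)

def clear_temp(directory, dry_run=True):
    target = Path(directory).resolve()
    # Safeguard 1: refuse to run against a directory that doesn't look like temp.
    if "temp" not in target.name.lower():
        raise ValueError(f"Refusing to clean unexpected directory: {target}")
    for item in target.iterdir():
        if item.is_file():
            if dry_run:
                logging.info("Would delete: %s", item)  # log-only rehearsal
            else:
                logging.info("Deleting: %s", item)
                item.unlink()  # actual deletion, only after the dry run is reviewed

# Safeguard 2: dry run is the default; deleting requires an explicit flag.
# clear_temp(r"C:\Windows\Temp")                  # rehearse: logs only
# clear_temp(r"C:\Windows\Temp", dry_run=False)   # delete for real
```

Running the dry run first and reading `cleanup.log` is exactly the "read the command carefully, test on non-production" discipline the checklist calls for.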
Reflection Question: A colleague suggests pasting a customer's support ticket (containing their name, address, and account number) into ChatGPT to help draft a response. Why is this problematic, and what would you recommend instead?