Boundaries and Limitations of AI

Introduction to AI for Work

James Chapman

AI Curriculum Manager, DataCamp

Risk #1: Knowledge Fabrication

 

  • Also known as hallucination: false information presented as fact by the AI
  • Common examples:
    • Incorrect statistics
    • Fake citations (see the sketch below)
    • Fabricated events
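One quick, hedged check for fake citations is to see whether a quoted reference URL resolves at all. The sketch below assumes the requests package; the arXiv URL is purely illustrative. A missing page does not prove fabrication, and a reachable page does not prove the citation supports the claim, so this only complements a manual read of the source.

# Minimal sketch: does an AI-quoted reference URL resolve at all?
# The URL below is illustrative; replace it with the citation you were given.
import requests

citation_url = "https://arxiv.org/abs/2510.01395"

resp = requests.head(citation_url, allow_redirects=True, timeout=10)
if resp.status_code == 404:
    print("Reference not found - the citation may be fabricated.")
else:
    print(f"Reference reachable (HTTP {resp.status_code}) - still verify it supports the claim.")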

hallucination1.jpg

Introduction to AI for Work

Risk #1: Knowledge Fabrication

 

  • Also known as hallucination: false information presented as fact by the AI
  • Common examples:
    • Incorrect statistics
    • Fake citations
    • Fabricated events

hallucination2.jpg

Introduction to AI for Work

Risk #1: Knowledge Fabrication

hallucination3.jpg

Introduction to AI for Work

Risk #2: Recency Ignorance

 

  • Knowledge cutoff date → the model's knowledge stops at this date; later events are unknown to it
  • Examples:

    • Outdated regulations, prices, locations, and coding syntax
  • Confirm time-sensitive facts against current sources (see the sketch below)

  • Many AI tools also support internet search
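As a concrete illustration of confirming time-sensitive facts, the sketch below fetches a live page and checks whether the value the AI quoted still appears on it. It assumes the requests package; the URL and the quoted answer are illustrative placeholders, not part of this course.

# Minimal sketch: cross-check a time-sensitive AI answer against a live source.
# The URL and the quoted answer are illustrative placeholders.
import requests

ai_answer = "Python 3.12"  # what the AI claimed is the latest release

page = requests.get("https://www.python.org/downloads/", timeout=10)
page.raise_for_status()

if ai_answer in page.text:
    print("The AI's answer still matches the current source.")
else:
    print("The AI's answer may be outdated - verify against the current page.")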

recency.png

Introduction to AI for Work

Risk #3: Biased Outputs

biased.png

 

  • AI models can reflect and amplify social biases from their training data
  • Result → stereotypical or discriminatory content
  • Review AI-generated content carefully!
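One lightweight way to review outputs for bias is a counterfactual check: send the same request twice, changing only a name or demographic detail, and compare the answers. The sketch below assumes the openai Python package and a gpt-4o-mini model; both are illustrative choices, not tools prescribed by this course.

# Minimal sketch: counterfactual probe for biased outputs.
# Assumes the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

template = "Write a one-sentence performance review for {name}, a software engineer."

for name in ["James", "Aisha"]:  # illustrative names; vary only this detail
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": template.format(name=name)}],
    )
    print(name, "->", response.choices[0].message.content)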
Introduction to AI for Work

Risk #4: Sycophantic Outputs

 

  • Sycophantic → telling you what it thinks you want to hear

    • Validates your perspective, supports your decisions, and affirms your thinking
  • Problematic when you require a critical perspective

sycophancy.png

Introduction to AI for Work

Risk #4: Sycophantic Outputs

 

  • Sycophantic → telling you what it thinks you want to hear

    • Validates your perspective, supports your decisions, and affirms your thinking
  • Problematic when you require a critical perspective (see the sketch after this list)

  • A by-product of training LLMs on human ratings, which tend to reward agreeable responses
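If you need a critical perspective despite this tendency, one workaround (a suggestion, not something prescribed in this course) is to ask the model explicitly to argue against you. The sketch below assumes the openai Python package and a gpt-4o-mini model, both illustrative.

# Minimal sketch: explicitly request critique to counter sycophantic answers.
# Assumes the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

proposal = "We should migrate all customer data to a new CRM next month."  # illustrative

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Act as a critical reviewer. List the three strongest objections before any praise."},
        {"role": "user", "content": proposal},
    ],
)
print(response.choices[0].message.content)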

 

sycophany_research.png

1 https://arxiv.org/abs/2510.01395
Introduction to AI for Work

Risk #5: Privacy and Data Exposure

privacy.png

 

  • Information shared with AI tools may not stay private!
  • Uploading spreadsheets, documents, or client data can be risky (see the sketch below)
  • Data may be stored on external servers, used for model training, or become searchable by others
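A lightweight precaution before pasting text into an AI tool is to redact obvious identifiers first; this reduces, but does not remove, the exposure risk. The sketch below uses Python's built-in re module, and the patterns are illustrative only, not an exhaustive PII filter.

# Minimal sketch: redact obvious identifiers before sharing text with an AI tool.
# The patterns are illustrative only - they do not catch every kind of PII.
import re

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)  # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)     # phone-like numbers
    return text

message = "Contact Jane Doe at jane.doe@example.com or +1 (555) 010-2030."
print(redact(message))
# Contact Jane Doe at [EMAIL] or [PHONE].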
Introduction to AI for Work

Risk #5: Privacy and Data Exposure

crying.png

 

Risks

  • Violating privacy laws (e.g., GDPR)
  • Losing a competitive advantage
  • Damaging client trust

 

  • Always check the data policy of the AI tool
  • If in doubt: consult an IT or InfoSec expert
Introduction to AI for Work

From Risk to Responsibility

risks_summary_v2.png

 

car.png

Introduction to AI for Work

Let's practice!

Introduction to AI for Work
