Risk classification

Understanding the EU AI Act

Dan Nechita

Lead Technical Negotiator, EU AI Act

Providers versus deployers

 

  • Providers build AI, deployers use AI.
  • Providers bear most of the safety obligations.
  • Deployers also have specific obligations.
  • Example: Microsoft (provider), insurance company (deployer).



The pyramid of risk

Obligations depend on an AI system's "intended use" and its risk tier: unacceptable, high, limited, and none.

Figure: risk pyramid with four levels.
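
The tiering logic can be read as a simple lookup: an intended use maps to a risk tier, and the tier determines the obligations. The sketch below is a purely illustrative simplification in Python; the use-case keys and their mapping are assumptions for demonstration, not the Act's legal categories or its full list of high-risk uses.

```python
# Purely illustrative: a simplified mapping from an AI system's intended use
# to its EU AI Act risk tier. The keys and mapping are example assumptions,
# not the Act's legal definitions.
RISK_TIERS = {
    "social_scoring": "unacceptable",       # banned practice
    "cv_screening_for_hiring": "high",      # strict obligations apply
    "customer_service_chatbot": "limited",  # transparency obligations
    "spam_filter": "none",                  # no specific obligations
}

def risk_tier(intended_use: str) -> str:
    """Return the (illustrative) risk tier for an intended use."""
    return RISK_TIERS.get(intended_use, "none")

print(risk_tier("cv_screening_for_hiring"))  # high
```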


Unacceptable risk

AI practices in the unacceptable-risk tier, such as social scoring, are banned outright under the Act.

Figure: risk pyramid with the unacceptable-risk tier highlighted.


High risk

High-risk AI systems, such as those used in hiring or credit scoring, are permitted but carry the bulk of the Act's obligations.

Figure: risk pyramid with the high-risk tier highlighted.


Limited and no risk

Chatbots, deep fakes, and AI-generated content are the prime examples of limited-risk AI, which carries transparency obligations; AI in the no-risk tier carries none.

Figure: risk pyramid with the limited- and no-risk tiers highlighted.


General-purpose AI (GPAI)

 

  • General-purpose AI providers have risk-based obligations.
  • Most face basic transparency obligations; models posing systemic risk face additional risk-mitigation obligations.



Let's practice!

