On the EU AI Code of Practice

Hyperdimensional

How we got here

When the European Commission began drafting the AI Act in 2021, society’s understanding of AI was fundamentally different from what it is today. Before 2022, “AI” largely meant narrow machine learning systems: computer vision models used for quality assurance in factories and farms, predictive models intended to help banks process loan applications, systems intended to improve electrical grid efficiency, and the like. These systems were used for discrete functions, often aiding firms and individuals in making specific kinds of decisions (Is this part defective? Is this loan candidate likely to repay their loan?).

At the time, few outside the still-relatively-small AI industry imagined the “generalist” AI systems, like ChatGPT, that are a common part of life today. These new systems came out before the AI Act had officially passed the European Parliament, but after much of the law had been negotiated. Rather than starting from scratch, European regulators chose to add a placeholder provision on “general-purpose AI models” and to flesh out the details later.

Two weeks ago, the first product of this process was released: the draft General-Purpose AI Code of Practice. The first round of comments is due on November 28. This essay constitutes our comment.
