Europe Flips the Switch on AI Governance: EU's AI Office and Act Take Effect

I woke up on August 11 with the sense that Europe had finally flipped the switch on AI governance. Since August 2, the EU's AI Office has been operational, the AI Board has been seated, and a second wave of the EU AI Act has kicked in, hitting general-purpose AI squarely in the training data. DLA Piper notes that Member States had to name their national competent authorities by August 2, with market surveillance and notifying authorities publicly designated, and the Commission's AI Office now takes point on GPAI oversight and systemic risk. That means Brussels has a cockpit, instruments, and air-traffic control: no more regulation by press release.

Loyens & Loeff explains what changed: provisions on GPAI, governance, notified bodies, confidentiality obligations for regulators, and penalties entered into application on August 2. The fines framework is now real: up to 35 million euros or 7% of global turnover for prohibited practices; 15 million or 3% for other listed violations; and 7.5 million or 1% for supplying misleading information to regulators, all calibrated down for SMEs. The twist is timing: some sanctions and many high-risk system duties only bite fully in 2026, but the scaffolding is locked in today.

Baker McKenzie and Debevoise both stress the practical breakpoint: if your model hit the EU market on or after August 2, 2025, you must meet the GPAI obligations now; if it was already on the market, you have until August 2, 2027. That matters for OpenAI's GPT-4o, Anthropic's Claude 3, Meta's Llama, Mistral's models, and Google's Gemini. Debevoise lists the new baseline: technical documentation ready for regulators, information for downstream integrators, a copyright policy, and a public summary of training data sources. For "systemic risk" models, expect additional safety obligations tied to compute thresholds: think red-team depth, incident reporting, and risk mitigation at scale.

Jones Day reports the Commission has approved a General‑Purpose AI Code of Practice, the voluntary on‑ramp developed with the AI Office and nearly a thousand stakeholders. It sits alongside a Commission template for training‑data summaries published July 24, and interpretive guidelines for GPAI. The near‑term signal is friendly but firm: the AI Office will work with signatories in good faith through 2025, then start enforcing in 2026.

TechCrunch frames the spirit: the EU wants a level playing field, with a clear message that you can innovate, but you must explain your inputs, your risks, and your controls. KYC360 adds the institutional reality: the AI Office, AI Board, a Scientific Panel, and national regulators now have to hire the right technical talent to make these rules bite. That’s where the next few months get interesting—competence determines credibility.

For listeners building or buying AI, the takeaways land fast. Document your model lineage. Prepare a training data summary with a cogent story on copyright. Label AI interactions. Harden your red‑teaming, and plan for compute‑based systemic risk triggers. For policymakers from Washington to Tokyo, Europe just set the compliance floor and the timeline. The Brussels effect is loading.

Thanks for tuning in—subscribe for more. This has been a quiet please production, for more check out quiet please dot ai.

The Artificial Intelligence Act Summary

The European Union Artificial Intelligence Act

The Artificial Intelligence Act (AI Act) represents a groundbreaking regulatory framework established by the European Union to oversee artificial intelligence (AI). This landmark legislation aims to harmonize AI regulations across EU member states, promoting innovation while safeguarding fundamental rights and addressing potential risks associated with AI technologies.

The AI Act was proposed by the European Commission on April 21, 2021, in response to rapid advancements in AI and the need for a cohesive regulatory approach. After rigorous deliberations and revisions, the European Parliament passed the Act on March 13, 2024, with a significant majority. The EU Council then unanimously approved the Act on May 21, 2024, marking a critical milestone in the EU's regulatory landscape.

The AI Act covers a broad spectrum of AI applications across various sectors, with notable exceptions for AI systems used exclusively for military, national security, research, and non-professional purposes. Unlike the General Data Protection Regulation (GDPR), which confers individual rights, the AI Act primarily regulates AI providers and professional users, ensuring that AI systems deployed within the EU adhere to stringent standards.

A pivotal element of the AI Act is the establishment of the European Artificial Intelligence Board. This body is tasked with fostering cooperation among national authorities, ensuring consistent application of the regulations, and providing technical and regulatory expertise. The Board's role is akin to that of a central hub, coordinating efforts across member states to maintain uniformity in AI regulation.

In addition to the European Artificial Intelligence Board, the AI Act mandates the creation of several new institutions:

AI Office: Attached to the European Commission, this authority oversees the implementation of the AI Act across member states and ensures compliance, particularly for general-purpose AI providers.

Advisory Forum: Comprising a balanced selection of stakeholders, including industry representatives, civil society, academia, and SMEs, this forum offers technical expertise and advises the Board and the Commission.

Scientific Panel of Independent Experts: This panel provides technical advice, monitors potential risks associated with general-purpose AI models, and ensures that regulatory measures align with scientific advancements.

Member states are also required to designate national competent authorities responsible for market surveillance and for ensuring that AI systems comply with the Act's provisions.

The AI Act introduces a nuanced classification system that categorizes AI applications based on their potential risk to health, safety, and fundamental rights:

1. Unacceptable Risk: AI systems that pose severe risks are banned outright. This includes AI applications that manipulate human behavior, real-time remote biometric identification (e.g., facial recognition) in public spaces, and social scoring systems.

2. High Risk: AI applications in critical sectors such as healthcare, education, law enforcement, and infrastructure management are subject to stringent quality, transparency, and safety requirements. These systems must undergo rigorous conformity assessments before and during their deployment.

3. General-Purpose AI (GPAI): Added in 2023, this category includes foundation models like ChatGPT. GPAI systems must meet transparency requirements, and those posing systemic risks undergo comprehensive evaluations.

4. Limited Risk: These applications face transparency obligations, informing users about AI interactions and allowing them to make informed choices. Examples include AI systems that generate or manipulate media content.

5. Minimal Risk: Most AI applications fall into this category, including video games and spam filters. These systems are not regulated, but a voluntary code of conduct is recommended.

Certain AI systems are exempt from the Act, particularly those used for military or national security purposes and for pure scientific research. The Act also includes specific provisions for real-time algorithmic video surveillance, allowing exceptions for law enforcement under stringent conditions.

The AI Act employs the New Legislative Framework to regulate AI systems' entry into the EU market. This framework sets out "essential requirements" that AI systems must meet, with European Standardisation Organisations developing technical standards to ensure compliance. Member states must establish notifying authorities, and conformity is demonstrated either through self-assessment by AI providers or through independent evaluations by third-party notified bodies.

Despite its comprehensive nature, the AI Act has faced criticism. Some argue that its self-regulation mechanisms and exemptions render it less effective in preventing potential harms associated with AI proliferation, and there are calls for stricter third-party assessments of high-risk AI systems, particularly those capable of generating deepfakes or political misinformation.

The legislative journey of the AI Act began with the European Commission's White Paper on AI in February 2020, followed by debates and negotiations among EU leaders. The Act was officially proposed on April 21, 2021, and after extensive negotiations, the EU Council and Parliament reached an agreement in December 2023. Following its approval by the Parliament and the Council in March and May 2024, respectively, the AI Act entered into force 20 days after its publication in the Official Journal, with applicability timelines varying by AI application type.

24 May 2024 · 6 min
