The European Union Artificial Intelligence Act is the world’s first comprehensive AI law; it is expected to enter into force in 2024, with its requirements applying in stages over the following years. The Act’s focus on trust, security and transparency will drive new levels of human oversight and regulatory compliance for AI within the EU. The Act also has extraterritorial scope, meaning that AI developers and deployers around the world must be prepared to adhere to its requirements.
Businesses everywhere will need to navigate an increasingly complex global landscape of regulations if they are to successfully leverage the promise of generative AI. This moment represents a significant opportunity to position IBM’s AI governance solutions and drive growth.
The EU AI Act takes a risk-based approach to compliance.
The Act does not regulate AI as a technology; instead, it regulates uses of AI according to their level of risk: low, high and unacceptable.
Low-risk uses are unlikely to impact health, safety or fundamental rights, and so face few requirements under the Act beyond transparency obligations.
High-risk uses could negatively affect health, safety or fundamental rights and must meet additional regulatory obligations. These include AI applications in critical areas such as transport, education, employment, essential services, law enforcement, migration, justice and the safety components of products, where they can affect citizens’ lives, rights and access to opportunities.
Unacceptable uses are prohibited, with very few exceptions. These include certain uses of facial and emotion recognition, and AI systems that manipulate or exploit people’s vulnerabilities.
General-purpose AI (GPAI) systems, including generative AI and foundation models, are also regulated by the Act, at two levels: high-impact and low-impact. High-impact GPAI models that pose systemic risk face the most stringent obligations.
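As an illustrative sketch only, the three-tier framing above can be expressed as a simple triage structure. All names and example use cases here are hypothetical; an actual risk classification requires legal assessment against the Act’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers as described above (a simplification of the Act's categories)."""
    LOW = "low"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical example mapping for first-pass triage only.
USE_CASE_TIERS = {
    "spam filtering": RiskTier.LOW,
    "CV screening for employment": RiskTier.HIGH,
    "exam scoring in education": RiskTier.HIGH,
    "emotion recognition": RiskTier.UNACCEPTABLE,
}

def triage(use_case: str) -> RiskTier:
    # Unknown use cases default to HIGH so they receive human review.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown cases to the high-risk tier is a conservative design choice: it routes anything unclassified to human oversight rather than letting it bypass review.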
The Act provides grace periods for compliance of up to three years. Prohibited practices must be phased out within six months. General-purpose AI models have 12 months to comply with transparency and governance requirements, while rules for high-risk AI systems integrated into products as safety components apply after 36 months.
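The staged timeline above amounts to simple deadline arithmetic from the Act’s entry-into-force date. The sketch below assumes a hypothetical entry-into-force date for illustration; the actual date depends on publication in the EU’s Official Journal.

```python
from datetime import date

# Hypothetical entry-into-force date, for illustration only.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Grace periods in months, per the staged timeline described above.
GRACE_PERIODS_MONTHS = {
    "prohibited practices": 6,
    "general-purpose AI models": 12,
    "high-risk systems (safety components of products)": 36,
}

def add_months(d: date, months: int) -> date:
    """Add whole months to a date (day clamped to the 1st for simplicity)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

# Compliance deadline per category, relative to the assumed start date.
deadlines = {k: add_months(ENTRY_INTO_FORCE, m)
             for k, m in GRACE_PERIODS_MONTHS.items()}
```

With an August 2024 start, this yields February 2025 for prohibited practices, August 2025 for GPAI models, and August 2027 for high-risk systems in regulated products.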
IBM is actively engaging with policymakers to help shape AI regulation.
IBM has long supported a use-based, precision-regulation approach to AI. IBM’s collaboration with the European Commission’s High-Level Expert Group on AI and the Organisation for Economic Co-operation and Development (OECD), which developed the standards on which the EU AI Act is based, positions IBM to help clients navigate this shift in the regulatory landscape.
IBM’s Integrated Governance Program enables adoption of responsible AI at scale and provides a world-class proof point for clients looking to do the same.
Developed for the IBM Office of Privacy and Responsible Technology and underpinned by IBM’s mature, principles-based governance framework and IBM technology, our internal Integrated Governance Program (IGP) efficiently manages more than 5,500 internal applications and processes and has been expanded to cover AI governance.
IBM offers products that help clients accelerate their AI governance in support of compliance with the EU AI Act and other impending regulations. IBM watsonx supports clients’ compliance activities across the following areas, mapped to the Act’s provisions:
Model risk governance:
- Article 5 – Prohibited practices
- Article 6/7 – High-risk AI systems
- Article 9 – Risk management system
- Article 13 – Transparency and provision of information to users
- Article 17 – Quality management system
- Article 19/43 – Conformity assessment
- Article 21 – Corrective actions
- Article 22 – Duty of information
- Article 23 – Cooperation with competent authorities
- Article 29 – Obligations of users of high-risk AI systems
- Article 30 – Notifying authorities
- Article 52 – Transparency obligations
- Article 60 – EU database for high-risk AI systems
- Article 62 – Reporting of serious incidents
- Article 69 – Codes of conduct
Deploy:
- Article 10 – Data and data governance
- Article 12/20 – Record keeping
- Article 15 – Accuracy, robustness and cybersecurity
Evaluation and monitoring:
- Article 15 – Accuracy, robustness and cybersecurity
- Article 61 – Post-market monitoring
Model documentation:
- Article 11 – Technical documentation
- Article 13 – Transparency and provision of information to users
- Article 18 – Documentation keeping
Build:
- Article 10 – Data and data governance
- Article 15 – Accuracy, robustness and cybersecurity
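The capability-to-article mapping above can also serve as a compliance checklist. The sketch below is a hypothetical data structure (article titles omitted, names our own) showing how an article number can be traced back to the capability areas that address it.

```python
# Hypothetical checklist mirroring the capability-to-article mapping above.
ARTICLE_MAP = {
    "model risk governance": [5, 6, 7, 9, 13, 17, 19, 21, 22, 23,
                              29, 30, 43, 52, 60, 62, 69],
    "deploy": [10, 12, 15, 20],
    "evaluation & monitoring": [15, 61],
    "model documentation": [11, 13, 18],
    "build": [10, 15],
}

def capabilities_for(article: int) -> list[str]:
    """Return the capability areas that address a given article."""
    return [area for area, arts in ARTICLE_MAP.items() if article in arts]
```

Note that some articles span several areas; for example, Article 15 (accuracy, robustness and cybersecurity) is addressed at deploy time, during evaluation and monitoring, and at build time.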