We’re getting your voice heard on the AI Act
Beltug and its sister organisations CIO Platform Nederland (Netherlands), Cigref (France), and VOICE (Germany) are working to ensure your voice (the voice of the business user) is heard by the people who will decide how artificial intelligence, including general purpose AI, is regulated in the European Union. We sent a letter to the European Parliament calling on it to take a value-chain approach when regulating general purpose AI.
05 / 05 / 23
The AI Act will be the first general regulation on artificial intelligence in the world. Discussions are taking place right now, and the act is expected to be adopted by the end of 2023. A two-year implementation period will follow, mostly for appointing regulators, but the rules will apply immediately to all AI in the European Union.
A ‘risk-based’ approach to requirements
The proposed act aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, it seeks to reduce administrative and financial burdens for businesses, especially small and medium-sized enterprises (SMEs).
The approach will be risk-based. For example, the use of AI for social scoring or in real-time face recognition by governments will be banned. Furthermore, AI in critical infrastructure, education, safety components, employment, law enforcement and justice, migration and democratic processes is considered 'high risk' to 'fundamental values', and will thus be strictly regulated. However, most AI applications have limited or minimal risk: video games, spam filters, autocorrect tools, social media, navigation, search recommendations, chatbots on websites, voice assistants, financial fraud detection mechanisms, etc. For this long list, nothing will change.
However, the risk score for general purpose AI can only be evaluated once it is actually put to use. When ChatGPT stormed the world, ensuring the proper regulation of that type of AI suddenly became the priority for organisations, including Beltug members, that were looking to use general purpose AI.
Beltug and sister organisations reach out to European Parliament
To ensure your needs and voice are heard, our letter calls on the European Parliament 'to take a value chain approach'. It is essential that general purpose AI providers share information about their systems with you, as the downstream providers, and ensure those systems comply with specific design standards.
Once general purpose AI is implemented, and its risk category becomes clear, requirements such as testing, documentation and accountability frameworks can be defined. In the letter, we stated that as the final provider, you need to know:
- the capabilities of the system and the purposes for which the system is built and trained;
- the (types of) data with which the system was trained and, if relevant, was not trained;
- the quality of the datasets (as you need to prove that yourself);
- the level of cyber security, accuracy and robustness;
- any relevant limitations of the system.
We also advocated for design requirements, including standards on cyber security, accuracy and robustness, and a requirement for providers of general purpose AI to maintain a risk management system. Without such procedures, users will have a blind spot regarding the quality of high-risk AI; particularly users who become providers themselves by developing and marketing an AI system that uses general purpose AI as a component.
What’s next?
We will monitor whether our suggestions have been adopted, and keep you updated. But in the meantime, as an organisation, you can already take steps:
- Get your legal department involved
- Make sure you know what AI your organisation uses and what AI falls within the high-risk category
- Set up a process to design, develop, register and monitor high-risk AI, and perform conformity assessments on a recurring basis
- Get a view on what information you need from your AI suppliers
- Join the Beltug Public Affairs task force if you are involved in public affairs (and your membership plan allows), or just reach out to the Beltug team
- Review the Beltug session on the AI Act from 20 April