Beltug advocates shared responsibility for AI
To balance the strong potential of Artificial Intelligence against security and fundamental rights, the European Union is developing a regulation for AI. Considering how important this is for our members and the economy in general, Beltug and sister organisations Cigref (France), CIO Platform (Netherlands) and VOICE (Germany) have been closely following discussions at the EU level, with a focus on ensuring that responsibility for AI systems is shared between users and vendors.
05 / 07 / 22
In the proposal, responsibility lies with those who use the systems. However, in ICT, users are generally not allowed to look ‘under the hood’ of AI systems, and in practice have no way to do so. They also typically rely on vendors’ building blocks, combining them to suit their needs. Thus, while users ‘train’ the AI models, they don’t necessarily have insight into all the code.
Beltug and our sister organisations believe that users and vendors should share responsibility, but only in cases where the user has received all the necessary information from the vendor. We have written to Members of the European Parliament (MEPs) Tom Vandenkendelaere and Geert Bourgeois, and contacted the Cabinets of Secretary of State Michel and Vice-minister De Sutter on the matter. We are also following discussions between member states.
So far, 3,312 amendments to the proposal have been tabled in the European Parliament. Kai Zenner, head of office for MEP Axel Voss, who is working on the act, maintains a good blog for following the European Parliament’s efforts on this topic.
The priorities of the Presidencies
Currently, the Presidency of the Council of the European Union is switching from France to the Czech Republic. EURACTIV has published an overview of the French Presidency’s latest compromise text on the section of the AI Act related to AI regulatory sandboxes. According to the overview, this version has been shortened and simplified to provide more flexibility for Member States. The French Presidency further suggested that the relevant authorities consider the testing conducted in the sandboxes for compliance assessments. Testing of high-risk systems in real-world conditions would be out of the scope of sandboxes under the French proposal.
The upcoming Czech Presidency of the Council of the EU has shared a discussion paper outlining the AI Act issues on which it will focus. These include the definition of AI, determining which use cases are considered high-risk, the governance and enforcement framework, and whether national security should be exempted from the regulation.