Search engines, social media tag suggestions, Deep Blue’s 1997 victory over Garry Kasparov, your spam filter, facial recognition…do you see the common thread yet? The answer is artificial intelligence.
As these examples show, the technology has existed for many years and is already widely used. It can help speed up the development of medicines by quickly determining the structure of proteins, or facilitate access to information through natural language processing.
But it is also being used for real-time surveillance of millions of people through facial recognition, and even for fraudulent activities through deepfakes.
To ensure that AI systems used in the European Union are safe and respect existing laws, the European Commission proposed a regulatory framework on Artificial Intelligence in April 2021. Beltug, together with its sister CIO associations Cigref (France), CIO Platform (Netherlands) and Voice (Germany), has developed a Position on this proposal.
A step towards safe, responsible AI
Beltug agrees that the framework set out in the regulation is appropriate for the safe, responsible and sustainable use of AI systems in Europe. The European Commission proposes a risk-based approach with classes of AI systems, where the intensity of the requirements varies with the level of risk to the health and safety of EU citizens and to their fundamental rights.
This risk-based approach provides clarity and oversight without creating unnecessary market-entry barriers for low-risk AI applications. Beltug is pleased that the AI Act extends to all AI systems that are placed on the market in the European Union or whose output is used there.
5 concerns to address
In our Position, we express serious concerns about five aspects of the proposed AI Act:
The EC proposal is now being discussed by the co-legislators, the European Parliament and the Council (the EU Member States). More information on the subject and the legislative process can be found on the European Parliament's website.