What’s next for AI? Will the European Union regulate the operation of artificial intelligence?

Sophia, the humanoid robot created by Hanson Robotics Ltd. Photo flickr (CC BY 2.0).

"A chatbot persuaded a father of two to commit suicide." Such headlines appeared in Belgian newspapers recently. The man took his own life after conversations with the chatbot "Eliza", with which he had for some time been sharing his anxieties about global warming. His family believes the bot worsened his mental state and that he would not have taken his own life had it not been for those conversations.

As an old Chinese curse goes, „May you live in interesting times.” It can be assumed that we are living in the most interesting times since the Industrial Revolution.

In 2019, it was revealed that the Dutch tax authorities had used algorithms to profile individuals who might be inclined to defraud childcare benefits. The system was designed to detect fraud at an early stage.

However, the authorities penalized families based on a mere suspicion of potential fraud, generated by the algorithm’s risk indicators. This led to tragedies for thousands of families who were often already in difficult life situations.

An investigation revealed that for over two decades, the tax authorities had focused their attention on individuals on low incomes and of Moroccan or Turkish descent. According to the Dutch newspaper "Trouw," the risk-profile criteria were developed by the tax authority itself. Most likely, the AI models were trained on databases that were biased from the start, and these biases carried over into the generated risk indicators.

What is most disturbing is that the scandal was caused not by a corporation but by a tax-funded institution. The role of the state is to ensure equal treatment and access to social assistance for its citizens. Yet here we have a system built on simplifications and stigmatization. Instead of setting standards, the state gave in to the basest prejudices.

Companies and governments worldwide are keen to use algorithms and artificial intelligence to automate their systems. The Dutch example shows the harm a system without proper safeguards and supervision can cause.

We could cite several more situations in which the lack of proper legal regulation and supervision of artificial intelligence led to abuses of power by the authorities.

AI Act – Will the European Union regulate AI operations?

The European Union is working on the AI Act, a set of rules and principles for the use of artificial intelligence that aims to limit the harm it can cause to people.

It is an approach based on the common ethical values important to the Union, which sets it apart from the Chinese approach based on total state control and limited individual rights, as well as from the American model, which overlooks the rights of the most vulnerable citizens. We will soon find out which approach prevails in Europe.

The European Commission aims to create a European regulatory environment for the operation of artificial intelligence. The goal of the AI Act is to regulate the technology market not by type of service or product but according to the level of risk posed by the AI. The legislation distinguishes four levels of risk:

  • Unacceptable (the highest level, associated with the risk of manipulation)
  • High
  • Limited
  • Minimal

The use of systems with unacceptable risk will be prohibited in the European Union. All companies and organizations using AI solutions will have to comply with the regulations.

What does the law mean for individuals?

The law places responsibility for the operation of high-risk systems on the providers, i.e., the creators of AI systems, even though the technology lobby tried to avoid this.

Companies will be responsible for verifying that their tools do not use artificial intelligence for prohibited practices. They will also have to implement data-control and cybersecurity measures and determine how their solutions affect humans.

How, then, should we understand a situation in which representatives of technology giants like Google dismiss the experts working on their AI ethics teams? Timnit Gebru and Margaret Mitchell raised concerns that applications from leading companies are prone to sexism and racism. The researchers argued that a lack of diversity among employees can allow the biases of software and system developers to be reflected in the artificial intelligence they build.

Timnit Gebru worked at Google researching the risks associated with AI software, specifically so-called large language models. This machine-learning technology is based on artificial neural networks. The quality of the text generated by AI is reportedly so high that it is difficult to distinguish from human language.

Capable of imitating human language, these models are already effective enough to propagate racial and gender biases. Recipients may not realize that they are interacting not with a human but with artificial intelligence.

And herein lies the threat of artificial intelligence spreading misinformation.

Google and other giants do not want regulation

The technology giants have pooled their resources and are lobbying to weaken the AI Act and delay its implementation. Companies, especially American ones, are trying to lower the requirements for high-risk systems. Lobbyists are pushing to exclude general-purpose AI systems from the regulation and to restrict the new law to the third-party companies that use such services. This would shift responsibility away from the creators of these systems and onto the shoulders of the companies that use them.

What about the limitation of our freedom by artificial intelligence?

Do we protest in the streets, outraged that it takes away our freedom? In practice, it may seem that technology companies offering products and services based on artificial intelligence expand our possibilities and range of choices. However, most of us are already aware that these companies use AI tools to manipulate our behavior and purchasing preferences. Will the AI Act change anything in this regard?

Although the provisions of the General Data Protection Regulation (GDPR) often draw a smirk from those they address, in reality these rules mean our personal data is better protected than that of Americans or Chinese citizens. The same may prove true once the AI Act comes into force.

Europe has taken up the challenge and is trying to regulate the operation of artificial intelligence while protecting the fundamental rights of Europeans. Will it succeed? We will find out soon enough.

Marta Stępniak, Paweł Marczak

Translated from Polish by Agnieszka Nikodem, July 2023
