The EU is proposing one of the first laws globally to regulate the use of artificial intelligence in applications like hiring and policing
European officials want to limit police use of facial recognition and ban the use of certain kinds of AI systems, in one of the broadest efforts yet to regulate high-stakes applications of artificial intelligence.
The European Union’s executive arm proposed a bill Wednesday that would also create a list of so-called high-risk uses of AI, such as in critical infrastructure, college admissions and loan applications, subjecting them to new supervision and standards for their development and use. Regulators could fine a company up to 6% of its annual worldwide revenue for the most severe violations, though in practice EU officials rarely, if ever, mete out their maximum fines.
The bill is one of the broadest of its kind to be proposed by a Western government, and part of the EU’s expansion of its role as a global tech enforcer.
In recent years, the EU has sought to take a global lead in drafting and enforcing new regulations aimed at taming the alleged excesses of big tech companies and curbing potential dangers of new technologies, in areas ranging from digital competition to online-content moderation.
The bloc’s new privacy law, the General Data Protection Regulation (GDPR), helped set a template for broadly applied rules backed by stiff fines that has been followed in some ways by other countries—and some U.S. states.
“Our regulation addresses the human and societal risks associated with specific uses of AI,” said Margrethe Vestager, executive vice president at the European Commission, the EU’s executive arm. “We think that this is urgent. We are the first on this planet to suggest this legal framework.”
Wednesday’s proposal faces a long road—and potential changes—before it becomes law. In the EU, such laws must be approved by both the Council of the European Union, representing the bloc’s 27 national governments, and the directly elected European Parliament, a process that can take years.
Some digital-rights activists, while applauding parts of the proposed legislation, said other elements appear too vague and offer too many loopholes. Others, aligned with industry, argued that the EU’s proposed rules would give an advantage to companies in China, which wouldn’t face them.
“It’s going to make it prohibitively expensive or even technologically infeasible to build AI in Europe,” said Benjamin Mueller, a senior policy analyst at the Center for Data Innovation, part of a tech-aligned think tank. “The U.S. and China are going to look on with amusement as the EU kneecaps its own startups.”
Some tech-industry lobbyists, however, said they were relieved the draft wasn’t more draconian, and applauded the approach of imposing strict oversight on only some types of so-called high-risk uses of AI, such as software for critical infrastructure and algorithms that police use to predict crimes.
“It’s positive that the commission has taken this risk-based approach,” said Christian Borggreen, vice president and head of the Brussels office at the Computer & Communications Industry Association, which represents a number of large technology companies including Amazon, Facebook and Google.
A handful of specific practices face outright bans in the bill. In addition to social-credit systems, such as those used by the Chinese government, it would ban AI systems that use “subliminal techniques” or exploit people with disabilities to “materially distort a person’s behavior” in a way that could cause physical or psychological harm.
While police would generally be blocked from using what the bill describes as “remote biometric identification systems”—such as facial recognition—in public places in real time, judges could approve exemptions, including to find abducted children, stop imminent terrorist threats and locate suspects of certain crimes, ranging from fraud to murder.
“The list of exemptions is incredibly wide,” said Sarah Chander, a senior policy adviser at European Digital Rights, a network of nongovernmental organizations. Such a list “kind of defeats the purpose for claiming something is a ban.”
Large banks have pioneered the work of opening up their artificial-intelligence algorithms to regulators, as part of government efforts to prevent another global credit crisis. That makes them a test case for how a broader range of companies will eventually have to do the same, according to Andre Franca, a former director on Goldman Sachs’ model-risk-management team and current data-science director at AI startup causaLens.
In the past decade, for instance, banks have had to hire teams of people to help present regulators with the mathematical code underlying their AI models, in some cases comprising more than 100 pages per model, Dr. Franca said.
Providers of AI systems used for purposes deemed high-risk would need to supply detailed documentation about how their systems work to show they comply with the rules. Such systems would also need a “proper level of human oversight,” both in how they are designed and how they are put to use, and would have to meet quality requirements for the data used to train the AI software, Ms. Vestager said.
The EU could also send teams of regulators to companies to scrutinize algorithms in person if they fall into the high-risk categories laid out in the regulations, Dr. Franca said. That includes systems that identify people’s biometric information—a person’s face or fingerprints—and algorithms that could affect a person’s safety. Regulators from the European Central Bank often scrutinize banks’ computer code in person over several days of workshops and meetings, he added.
The EU says most uses of AI, including in videogames or spam filters, would face no new rules under the bill. But some lower-risk AI systems, such as chatbots, would need to inform users that they are not talking to a real person.
“The aim is to make it crystal clear that as users we are interacting with a machine,” Ms. Vestager said.
Deepfakes, or software that superimposes a person’s face onto another person’s body in a video, would require similar labels. Ukraine-based NeoCortext Inc., which makes a popular face-swapping app called Reface, said it was already working on labeling and would try to follow the EU’s guidelines.
“There is a challenge now for fast-growing startups to develop best practices and formalize standard codes of practice,” said NeoCortext’s chief executive, Dima Shvets.
The new regulations might not necessarily have the same impact as GDPR, simply because AI is so broadly defined, according to Julien Cornebise, an honorary associate professor in computer science at University College London and a former research scientist at Google.
“AI is a moving goal post,” he said. “Our phones are doing things every day that would have been considered ‘AI’ 20 years ago. There’s a risk that the regulation could be either lost in definition or quickly become obsolete.”
Source: The Wall Street Journal