Harmful AI rules: Now brought to you by Europe & Co., Inc.

Companies’ standards experts will help shape the EU’s crucial rules for artificial intelligence.

Companies, many of them from outside Europe, will play a key role in deciding the details of the European Union’s planned rules for potentially dangerous artificial intelligence. But corporate influence over decisions that risk human rights has some activists worried.

The EU’s new law on artificial intelligence aims to protect people from harmful AI by cracking down on discriminatory, opaque and uncontrolled algorithms that are increasingly being used to make life-changing judgments on immigration, policing, social benefits and schooling.

The rules don’t target “normal kinds of products,” but aim to halt “potential violation of constitutional rights, whether it’s about the use of biometric surveillance, discrimination or your access to employment and education,” said Iverna McGowan, the director of the Center for Democracy and Technology in Europe. She criticizes the decision to entrust “private-sector dominated” European standards groups with shaping the final rules.

But that’s the way the EU’s Artificial Intelligence Act is designed. It leans on industry forums such as CEN-CENELEC and ETSI to write the technical instructions that ensure AI systems are trained on unbiased data and, ultimately, to determine how much human oversight is needed and what must be done to keep the software from going off track.

France-based ETSI counts over 900 members, including tech giants like Microsoft and Facebook’s parent Meta Platforms as well as European defense companies like Thales and Chinese telecoms equipment provider Huawei. The ETSI group coordinating AI work is led by executives from Japanese telecoms company NEC, China-based Huawei and U.S. chipmaker Intel.

ETSI operates what researchers in the United Kingdom have described as a “pay-to-play” model, in which members paying higher subscription fees get more votes in meetings. That setup can favor larger, richer corporations, as well as global companies able to sign up multiple national subsidiaries as distinct members. Huawei, for instance, is represented by six members (from Huawei Technologies to HUAWEI TECH. GmbH).

CEN-CENELEC includes industry standards experts from 34 European countries, including non-EU countries such as the U.K., Serbia and Turkey. The group ultimately represents thousands of EU and non-EU companies.

However, some say that participating will also draw non-EU companies into embracing European industrial standards, since companies building high-risk AI systems will have to evaluate their own compliance by following those industry standards.


Conflict of interest

Standards organizations set the rules that make products and services work. They draw up technical specifications to determine the quality and safety of everything from teddy bears and batteries to complicated machinery and even data transfers. European groups like CEN-CENELEC and ETSI have been crucial in getting telecom companies to agree on global shared standards for mobile networks and cybersecurity.

But for some, using the industry’s bureaucrats to figure out how to bring ethics into AI is a step too far, when companies’ primary aim is to grab a share of a global AI market estimated to be worth more than €1 trillion by 2029.

“AI is a hugely profitable endeavor that is reshaping multiple areas of society and is not going to be fixed by a piece of legislation that treats it like a toy, a radio or a piece of protective equipment,” said Michael Veale, an associate professor in digital rights and regulation at University College London.

Companies focus on getting “products to the market but the AI Act seeks to limit harms,” said Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties.

Engineers and technical experts in standards groups will likely struggle with ethical questions when they are tasked with translating an AI law into standards, Shrishak said. They will have to decide what constitutes fair and representative data sets for algorithms, and how much documentation, transparency and human control an AI program requires.

These decisions are critical. Flawed data at the heart of AI programs can reinforce social inequalities and prejudices and have far-reaching consequences like misdiagnosing diseases for minority racial groups and limiting job opportunities for women.

Ethical and geopolitical standards

Standards groups don’t see themselves as vessels of corporate influence. On the contrary, they argue that their working methods show they can reach “solutions that take all points of view into account,” according to Markus Mueck, an Intel engineer and first vice chair of ETSI’s coordinating group on AI.

They’re not being asked “to start developing standards about ethics” but merely to implement them, said Constant Kohler, who manages the work on AI for CEN-CENELEC.

The European Commission said it would have the final say in vetting the standards drafted for the AI Act. It also wants European standards groups to involve human rights campaigners in decision-making on harmful AI.

At the same time, standards groups are under pressure to overhaul how they work as geopolitical tensions brew over supply chains. The Commission is pushing standards organizations to limit the influence of large companies and reform their governance by 2022 so that they “fully represent the public interest.” In practice, that means bringing in more nongovernmental organizations and curbing the power of non-European companies.

Renew MEP Dragoș Tudorache, one of the European Parliament’s point persons on the AI rules, believes industry should continue to drive standard-setting, but wants the rules to go a step further, barring companies controlled by some authoritarian regimes from industry standard-setting.

“Industry is not the enemy,” he said, but it now has “increased responsibility… in forging Europe’s digital path.”
