
AI Regulation is a Chance to Fix Deeper Problems – Part I

Authors
Jean-François Gagné

(This article originally appeared on JF Gagne's blog here.)

Self-regulation doesn’t work. The interests of business are too narrow, and it is too easy for companies to claim that better practices are too hard. Our narratives about business and technology further abstract away the power that the makers and owners of machines have to make different choices. The only counterbalance is government intervention, influenced by civic action. Unfortunately, that intervention usually comes long after people have made the problems clear. And when it does come, the changes are often poorly negotiated yet touted as a great victory for society as a whole.

The Artificial Intelligence Act (AIA), Europe’s new bill for regulating AI proposed back in April, could be a different story. Though we may be late in addressing the risks of digital technology in general, the AIA is timely for AI, arriving while the tech is still in its early days of development. This is the next big step for AI regulation, and it is making waves in the tech world.

Before getting into the regulation and what to expect, I want to frame the significance of this opportunity by looking back at another moment of industrial regulation: the fight for the 8-hour workday.

(Just a quick note: this post is a bit longer than usual; after all, it’s been a while… If you just want to read about what the AIA is and its political development and reception, feel free to skip to “What to expect from the AIA”.)

The fight for the 8-hour workday 

The civic movement for the 8-hour workday had fitful starts throughout the Industrial Revolution, as the robotic pace of machinery clashed with traditional 10-to-16-hour workdays. As early as 1817, Robert Owen in the UK coined his famous slogan “Eight hours’ labour, Eight hours’ recreation, Eight hours’ rest”, which would help the idea catch fire in the US. A version of that phrase was chanted at the deadly Haymarket Riots in Chicago in 1886, which became the origin of May Day. But even with swelling social movements and bloody protests, it would be well into the next century before the US Congress formalized the 8-hour workday in law, in 1937.

Indeed, the real tipping point didn’t come until Henry Ford, against the trend of his competitors, implemented the 8-hour workday in 1914 and saw productivity rise and profits double in just two years. He was able to do this, and by his own account had to do it, because of his assembly-line innovation that put industrial machinery to use: the jobs had become so simple and basic that the mundane work led to high and costly turnover. His change to shorter hours also came with doubled pay and profit sharing, nullifying both the dissatisfaction his workers had felt and the skepticism of his competitors. In the face of such compelling evidence, the rest of the industry soon followed suit.

Henry Ford is far from a perfect hero, and his story is a complex one, but it is one of the creation stories of modern industry, and it highlights useful lessons for regulating technology.

Lesson 1: Fixing a problem created by the dynamics of technology is a means to, and truly requires, fixing a societal problem. Ford’s intervention not only relieved a problem created by technology, high turnover for boring work; it also helped relieve the grueling and dangerous 10-to-16-hour workday that had been a part of society long before the Industrial Revolution.

Lesson 2: Technology will make systemic societal issues worse if they are not accounted for. While Ford did alleviate the societal problem of the long workday, plenty of his practices left problems unresolved or exacerbated others. Even the innovative assembly line that helped enable the change led to terrible injuries from repetitive motion. Many of the norms of industrial work, such as the respect and value accorded to workers, were so ingrained that a relatively small change seemed revolutionary.

Lesson 3: Short of all-out revolution, change takes all three sectors (public, private, and civil), which means each needs a satisfying outcome. Unfortunately, the resulting stalemate allows the dynamic of the second lesson to compound: many societal problems both worsen and become more entrenched through the technology. The impasse narrows the scope of convenient interventions and lessens the net positive effect of innovation for society. Arguably, interventions should come sooner, and be more ambitious, when the window of willingness finally arrives.

We have just such a moment of willingness with the AIA. The nature of the bill also has the potential to address the deeper issues we must face in order to successfully regulate tech.

What to expect from the AIA

A slow build for effect

When the AIA came out, you could feel the rumblings through the legal departments of the tech world. At first there was the usual reaction from lobbyists to kill the bill, and corporate comms departments prepped their opening proclamations that they would “assess it” to kick off the pageantry of “reasonable discussion” about the risks and benefits of the regulation. But before they can even get started on their song and dance, companies are learning from their government relations departments that this bill will not be killed. Rather, they will need to swallow this pill and focus instead on helping their customers do so too.

The bill is a fresh example of what Anu Bradford dubbed the “Brussels Effect” in her book of the same name last year. Bradford has documented, in case study after case study, in food safety, environmental protection, the automobile industry, and more, how market regulations developed in Europe become global standards. Due to the chasm between the regulatory environments of China (more authoritarian) and America (more liberal), and Europe’s position as the second largest market in the world, multinational organizations align their global policies to Europe rather than undertake costly differentiation by region. Interestingly, those very same lobbyists whose knee-jerk reaction was to kill the bill may find themselves instead advocating for the same regulation at home so as not to be undercut by competitors operating only locally.

GDPR is perhaps the best example of this, as the digital economy’s borders are both nowhere and everywhere. For all its faults as a blunt instrument, it has pushed Google, Amazon, Facebook, and others to each adopt a single global privacy policy that (mostly) reaches the standards of the EU. It has forced them to engage, and GDPR has become sharper in the years since it was passed.

The AIA has been designed with the Brussels Effect fully in mind, along with the lessons learned from deploying GDPR. It has been years in the making. Since Europe launched its AI Strategy in 2018, its High-Level Expert Group on AI (HLEG AI) has been developing guidance for regulation and investment in collaboration with the European community. I chaired the development of the seven ethical principles that underpin today’s regulation, which we published in 2019.

You can see my post here that gives a more accessible breakdown of them, why we need them, and how we came up with them. But here’s the core stuff:

3 basic requirements

  1. Lawful –  respecting all applicable laws and regulations
  2. Ethical – respecting ethical principles and values
  3. Robust – both from a technical perspective and with regard to its social environment

7 principles

  1. Human Agency and Oversight forms the first requirement, grounded in an adherence to fundamental and human rights and the necessity for AI to enable human agency.
  2. Technical Robustness and Safety concerns itself with the development of the AI system and focuses both on the resilience of the system against outside attacks (e.g. adversarial attacks) and on failures from within, such as a miscommunication of the system’s reliability.
  3. Privacy and Data Governance bridges responsibilities between system developers and deployers. It addresses salient issues such as the quality and integrity of the data used in developing the AI system, and the need to guarantee privacy throughout the entire life cycle of the system.
  4. Transparency demands that both technical and human decisions can be understood and traced.
  5. Diversity, Non-Discrimination and Fairness ensures that the AI system is accessible to everyone. This includes, for example, bias avoidance, the consideration of universal design principles and the avoidance of a one-size-fits-all approach.
  6. Societal and Environmental Well-Being is the broadest requirement and includes the largest stakeholder: our global society and the environment. It tackles the need for AI that is sustainable and environmentally friendly, as much as its impact on the democratic process.
  7. Accountability complements all the previous requirements, as it is relevant before, during and after the development and deployment of the AI system.

*These are shortened adaptations of the final principles; go here for the full wording.

These principles have gone through numerous iterations of feedback from the 4,000 members of Europe’s AI Alliance, organizations from across industry and civil society, and they informed how we shaped the final Assessment List for Trustworthy AI (ALTAI), which was then piloted with 350 organizations over the last year. All of this work has informed the AIA you see today; it is not going away because a few more companies need a chance to “assess it”.

What the AIA actually says

The work is just beginning. If you read the AIA, you’ll see it is still quite broad. Currently, it categorizes AI systems by risk: Unacceptable, High, Limited, and Minimal. The main focus is on High-risk uses and the obligations prescribed for them. I’ll use the direct language here from the announcement page, which closely tracks the language of the bill.

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.

Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.

High-risk: AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence).

*I left this last paragraph in because it’s indicative of the many exceptions and caveats in the bill.

There’s a lot of ambiguity there, and in the bill itself, but don’t mistake this for the obtuse state GDPR started out in. The AIA is yet another measured and determined step forward in this process. We are now entering a multi-year period of clearly defining the standards (as a benchmark, GDPR took about four years). This could be an opportunity for companies to defang the regulation: they carry huge influence by employing most (or all) of the few people in the world qualified to shape these kinds of technical standards. But, again, Europe is ready.

Rooting out bad faith 

The AIA has some interesting provisions that prepare for bad-faith standards building. It starts with a reasonable statement that the AIA will not supersede any suitable pre-existing regulation that already aligns with its guidelines, or that can be updated to align with them. The catch is that if the European Commission deems those standards unsuitable, it will amend them into the law itself. This effectively takes the role of determining the standard away from venues where the organizations have influence. The provision forces those involved to hedge against any standard they create being deemed insufficient: they will have to show at least some of their cards and collaborate on something that will pass, rather than simply shrinking it as much as possible.

I have also heard from people that the bill doesn’t go far enough. Again, there is a lot of room to define these standards precisely. If you read my post about the seven principles from the HLEG AI, you’ll remember that accountability is a really tricky thing with AI. Unlike a piece of machinery, or even traditional software, an AI system is much harder to separate into pieces, because the software updates dynamically based on how it is used and interacted with. Much of the AI ecosystem is also built on cloud platforms. A company offering proctoring services for online exams may have a feature that tracks faces to check for cheating. That feature is likely built on pre-trained facial recognition and eye-tracking models from Amazon or Microsoft, then customized for exam proctoring. It’s hard to draw a distinct line between where the cloud product ends and the customized product begins, or to say how the customization and use might affect the integrity of the original model.
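To make that blurred line concrete, here is a minimal, purely illustrative sketch of the pattern described above: a pre-trained vision backbone standing in for a cloud provider’s model, with a small task-specific head fine-tuned by the downstream vendor. The proctoring task, the data, and the model choice (a generic torchvision ResNet) are all assumptions for illustration, not anything specified by the bill or by any vendor.

```python
# Hypothetical proctoring feature: an upstream pre-trained backbone
# combined with a small head trained by the downstream vendor.
import torch
import torch.nn as nn
from torchvision import models

# Upstream component: a backbone trained elsewhere, on data the vendor never sees.
backbone = models.resnet18(pretrained=True)
backbone.fc = nn.Identity()  # strip the original classification head

# Downstream component: a head the vendor trains for its own (hypothetical)
# task, e.g. "is the test-taker looking away from the screen?"
head = nn.Linear(512, 2)
model = nn.Sequential(backbone, head)

# Freeze the upstream weights; they are never updated, yet they still
# determine most of the combined model's behaviour.
for p in backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for the vendor's own proctoring data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

# One fine-tuning step: only the vendor's head changes.
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

The resulting system is neither purely the upstream provider’s product nor purely the vendor’s, which is exactly what makes accountability hard to assign.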

You can be sure the cloud providers need to reassure their clients that they know what to expect from the AIA and how to stay in compliance. Only this time, they won’t be able to steamroll their way through the regulation process to get what they want. Determining the right distribution of accountability will require more than the technical expertise that usually allows them to dominate the conversation.

Negotiating a new social contract

We are overdue for a rebalancing of tech’s benefits and harms. We’ve seen for decades a decoupling of productivity and wages, and as software becomes more powerful it is also being used to further repress human rights and agency. These harms are not necessarily explicit or intentional; rather, they amplify inequities we have taken for granted in society. Just as in the lesson of Ford’s 8-hour workday, fixing the technological problems will mean addressing underlying societal problems.

A common call in the AI ethics discussion has been to include more stakeholders at the table and to involve the humanities in determining the answers to these questions of accountability and limits. This negotiation of our social contract shouldn’t be limited to high-level groups or a few offices in Silicon Valley. People’s values are too subjective for that. The negotiations will need to happen across society, and include the workers and consumers who are most directly impacted by these yet-unwritten rules.

Many of these standards bodies are open for comment and even participation (here’s where you can apply to the International Organization for Standardization’s group on AI for Canada). We should go much further than the bare minimum and look to redress our society’s deeper problems through the way we collaborate with machines.

The window of willingness is here. A major government has made its move. We can expect the US to take less of a backseat than it has in the past as it looks to rally support in the face of China, the major exception to the Brussels Effect. Civic action is swelling (see Part II, coming soon) and will continue to mount pressure on businesses, who may need to be dragged along at first. But businesses will need to come along eventually, and I think they, too, will find they benefit from this opportunity to reshape the way we collaborate with machines (see Part III, also coming soon).
