AI Regulation is Essential, But Europe’s Innovation Capacity is Potentially Being Stifled
The EU recently took a major step towards passing one of the world’s first laws governing artificial intelligence, after the European Parliament approved draft legislation with rules on facial recognition technology, drones, deepfakes, bots, automated medical diagnoses and more.
“AI raises a lot of questions socially, ethically, economically. But now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility,” said Thierry Breton, the European commissioner for the internal market.
Unlike the proposed legislation coming out of the UK, there was little mention of what impact the EU AI Act will have on innovation. Which isn’t to say the EU is not committed to innovation. Ever since the beginning of its “ethical AI” project, there has been a belief that regulation will somehow lead to greater innovation.
“I am personally convinced that ethical guidelines will be enablers of innovation for artificial intelligence,” said then-Digital Commissioner Mariya Gabriel in 2019.
Why this should be the case was never elaborated on, nor has it been in the four years since Gabriel made those comments.
A Complex Relationship
While few doubt the need for better regulation around AI, especially in the US, inferring a direct causal link – as Gabriel and many experts have done – between regulation and innovation is problematic. Looking at the evidence, the US is the clear global leader in generative AI, with China a distant second and Europe lagging even further behind. More generally, the US remains the global leader in AI overall; China is making progress at closing that gap, with Europe again lagging far behind. Thus, comparing the innovation-high, regulation-low US to regulation-high, innovation-low Europe could lead one to the opposite conclusion: that higher regulation leads to lower innovation, and vice versa.
Of course, attributing such direct causality would be a gross oversimplification, despite the strong correlation (at least in AI) between lower regulation and higher innovation, and vice versa; other sectors must be taken case by case, and in FinTech, for example, increased regulation has been observed to enable innovation. And besides, regulation is an absolute necessity. Technology, particularly AI, is accelerating at an exponential rate and the consequences may be severe if we don’t get the regulation question right. But managing the risks and rewards of emerging technologies like AI is a balancing act.
While some believe regulation should be avoided lest it hinder innovation, and certainly too much regulation can stifle it (think of wind energy projects, where only 19-21% of planned projects are under construction, with most wind farms stuck in the permitting process, according to Deloitte), proper regulation is essential. Put simply, too little regulation may lead to extremely negative consequences that far outweigh the consequences of too much.
The consequences of too little regulation range from current issues around privacy (e.g. facial recognition technology, mass surveillance) and self-contained tragic accidents (e.g. fatalities resulting from autonomous vehicle crashes) to the potentially apocalyptic: Turing Award winner Judea Pearl believes “we’re going to have robots with free will, absolutely,” while computer scientist Steve Omohundro argues autonomous systems are likely to behave in anti-social ways, and DeepMind co-founder Shane Legg expects that “human extinction will probably occur.”
But while regulation is essential, a priori appeals to increased regulation automatically leading to increased innovation are unhelpful, considering no empirical evidence exists to support this assertion. Equally, appeals to less regulation – or no regulation – leading to increased innovation are dangerous and potentially catastrophic. Beyond apocalyptic futures, at the most basic level regulation provides the necessary preconditions to enable market access for innovation, provides firms considering major investment with certainty, and can be used to articulate ambitious visions for development. Regulation also establishes the conditions and context of innovation, as regards labour, capital and competition, etc.
The challenge is producing a regulatory framework that maximises the upsides of innovation while minimising the potential downsides and protecting the rights of individuals and societies.
Regulating the Pace of Change
New technology always brings unexpected consequences and the potential to be used in unanticipated ways, some good and some harmful; this is where regulators step in. Most key sectors, like cars, aviation, healthcare and finance, are heavily regulated, but technology, particularly AI, moves at a much faster pace than regulators do, which makes finding the right way to regulate it difficult.
Regulators traditionally aimed to mitigate social, economic, safety and environmental risks for consumers while ensuring fair markets, but sweeping changes in technology are altering the regulatory environment. According to Deloitte, regulatory agencies are increasingly being called upon to not only protect consumers from the negative effects of technology, but also to help catalyse innovation, effectively protecting consumers and citizens through regulation while ensuring regulations don’t discourage innovation and growth.
To ensure consumer safety doesn’t come at the expense of innovation, regulators are deploying tools like sandboxes: safe testing environments in which innovators can see their inventions play out with certain regulatory leeway and appropriate consumer protections. According to Deloitte, firms in the UK’s FinTech regulatory sandbox saw a 15% increase in capital raised, as the sandbox reduced regulatory uncertainty, helped firms bake in appropriate safeguards, and reduced expenditure on regulatory consulting. Similarly, Singapore’s regulatory sandbox, which aims to mitigate risks around autonomous vehicles (AVs), has been attracting investment to the city-state since 2015.
Apart from clarifying risks to encourage investment, Deloitte also recommends regulatory agencies incentivise innovation, streamline regulation, and set standards that promote industry-leading practices. Effective regulation doesn’t necessarily require years of drafting, as soft law instruments – such as guidelines and standards – can rapidly adapt to new business models, while a customer-experience lens can improve the relationship between businesses and regulators. Digital technologies can streamline the regulatory process, while regulators can proactively engage with regulated entities to develop standards and guidelines that protect consumers from risks without putting an unnecessary burden on those entities.
Moving in Different Directions
The UK’s Department for Science, Innovation and Technology recently published a whitepaper focusing on taking ‘a pro-innovation approach to AI regulation,’ with a proposed regulatory framework made up of the following tenets:
“Enabling rather than stifling responsible innovation; avoiding unnecessary or disproportionate burdens for businesses and regulators; addressing real risks and fostering public trust in AI in order to promote and encourage its uptake; adapting quickly and effectively to keep pace with emergent opportunities and risks as AI technologies evolve; making it easy for actors in the AI life cycle, including businesses using AI, to know what the rules are, who they apply to, who enforces them, and how to comply with them; encouraging government, regulators, and industry to work together to facilitate AI innovation, build trust and ensure that the voice of the public is heard and considered.”
It’s a far cry from the EU’s approach to AI regulation, which almost pits innovation and regulation against each other in its quest to become the world leader in so-called “trustworthy AI.” Although the long-proposed EU AI Act was once hailed by Deloitte as “a new regulatory paradigm for innovation,” so far there has been very little innovation – but multitudes of legislation and proposed legislation – while China and the US continue to pull further away in what has become a two-horse AI race.
Former Digital Commissioner Mariya Gabriel has stated her conviction that “ethical guidelines will be enablers of innovation for artificial intelligence,” but as yet there is no supporting evidence to justify this statement. In fact, EU rules are hampering innovation in at least one area.
Potentially Stifling Innovation
With the introduction of GDPR in 2018, the EU gained some of the strictest rules in the world on the use of personal data. But the more data a deep learning system is given access to, the better and ‘smarter’ it becomes. European tech firms say that a lack of access to data due to GDPR is putting them at a disadvantage to global competitors, according to Politico.
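The intuition that model quality scales with data can be illustrated with a toy sketch (hypothetical numbers, standard library only, with a simple frequency estimate standing in for a real deep learning system): the more samples the “model” sees, the closer its learned estimate gets to the truth.

```python
import random

def learn_click_rate(n_samples: int, true_rate: float = 0.3, seed: int = 0) -> float:
    """Fit a trivial 'model' (a frequency estimate) from n observed samples.

    A stand-in for training on personal data: each sample is one user
    interaction, and the model's only parameter is the estimated rate.
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    clicks = sum(rng.random() < true_rate for _ in range(n_samples))
    return clicks / n_samples

# Estimation error generally shrinks as the training set grows,
# which is why restricted data access can translate into weaker models.
for n in (10, 1_000, 100_000):
    estimate = learn_click_rate(n)
    print(f"n={n:>6}  error={abs(estimate - 0.3):.4f}")
```

The same logic is why GDPR-style limits on data collection, whatever their other merits, impose a real cost on firms training data-hungry models.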
Loubna Bouarfa, CEO and founder of OKRA.AI and former member of the European Union High-Level Expert Group on AI, has said that data barriers between European countries make it “very hard” for entrepreneurs to fully exploit the potential of AI technology. “Europe is falling behind on AI, and we do really need to act quickly.”
Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations, has stated that Europe will only be able to push its AI standards globally if its ethical ambitions are accompanied by efforts to boost a top-notch AI industry across the EU. “It’s absurd to believe you can become a world leader in ethical AI before becoming a world leader in AI first,” she is quoted as saying to Politico.
Daniel Castro, VP of the think tank ITIF – whose board includes members from Amazon, Apple, Google and Microsoft – dismisses the EU’s approach to AI as “naïve” and thinks the EU will continue losing out to the US and China because customers don’t care about an ethics-first approach in itself without also having a superior product.
“It’s like any other race: you can have the more ethical race car driver, but if his car is not faster, you are going to lose,” Castro told Politico. “This is still a market-based economy…you have to create something of more value than your competitors. The European Commission itself has not provided any evidence that customers are actually willing to pay for (what the EU is proposing).” A survey from the Center for Data Innovation found consumers are not willing to pay a premium for products labelled “ethical by design.”
People and Bias
However, it’s important to re-emphasise that although EU regulation has not fostered innovation so far, this does not mean regulation and innovation are incompatible. Indeed, it’s equally important to re-emphasise that regulation is vital.
For example, because algorithms “learn” from real-world data, they are vulnerable to incorporating unconscious biases against minorities and other vulnerable groups. Politico notes that Amazon scrapped an AI-powered recruiting tool that discriminated against women, ProPublica revealed that predictive policing software used by US authorities shows bias against black people, and Google issued an apology after one of its machine-learning applications labelled being Jewish or being gay as negative. Researchers at the Georgia Institute of Technology also found that the pedestrian-detection systems used in self-driving cars are less accurate at recognising people with darker skin tones, making them more likely to be struck.
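How a system “learns” bias is easy to demonstrate with a minimal sketch (hypothetical data, standard library only): a model that simply learns average outcomes from a biased historical record will reproduce that bias verbatim in its predictions.

```python
# Hypothetical hiring records: group B candidates were historically
# under-selected at equal qualification levels, so the data itself
# encodes the bias. Format: (group, qualification score, hired?)
historical = [
    ("A", 8, 1), ("A", 7, 1), ("A", 6, 1), ("A", 5, 0),
    ("B", 8, 0), ("B", 7, 1), ("B", 6, 0), ("B", 5, 0),
]

def train(records):
    """'Learn' the hire rate per group -- a stand-in for a real classifier."""
    counts = {}
    for group, _score, hired in records:
        n, hires = counts.get(group, (0, 0))
        counts[group] = (n + 1, hires + hired)
    return {g: hires / n for g, (n, hires) in counts.items()}

model = train(historical)
print(model)  # {'A': 0.75, 'B': 0.25} -- the historical bias is learned verbatim
```

No malice is required anywhere in the pipeline: the model faithfully reproduces whatever pattern, fair or unfair, the training data contains.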
Developments like this are why the EU wants to promote “trustworthy” AI, which respects European values and is engineered in a way that prevents it from causing intentional or unintentional harm. According to Virginia Dignum, professor of social and ethical artificial intelligence at Sweden’s Umeå University, it’s about what’s best for consumers.
“In a sense, ‘ethics’ isn’t the goal,” she noted. “We want (AI) to be ethical and socially responsible because we want AI systems to be trusted, and useful for people.”
So far, the EU has not got the balancing act between regulation and innovation right, but there’s no reason to throw the baby out with the bathwater when it comes to the regulation/innovation question – specifically how to create regulation that fosters innovation. Dignum also raises a key point: people.
Forbes recently “quoted” ChatGPT as saying, “I do not have personal beliefs or feelings, including racism. I was programmed to provide responses based on the input I received and the knowledge and language patterns I have been trained on.” In other words, it is people who imbue AI with their own unconscious biases, and that is a serious problem.
But it also goes the other way. Diversifying the researchers creating AI systems, and the datasets that algorithms use to learn, can help teach better habits and ensure more equitable outcomes from AI and machine-learning systems. At the end of the day, it’s people who will determine the success of AI going forward, whether at the technical, regulatory or leadership level.
Leadership, Mindset and Organisational Structure
Project Maven was a US Pentagon program designed to deliver AI technologies to an active combat theatre within six months from when the project received funding. Although somewhat controversial (mainly due to the later involvement of Google), the project was highly successful overall and offers key learnings around AI and digital/technology leadership.
Namely, project success was enabled by its organisational structure: a small, operationally focused, cross-functional team empowered to develop external partnerships, leverage existing infrastructure and platforms, and engage with user communities iteratively during development. The six founding members of Project Maven, though they were assigned to run an AI project, were not experts in AI or even computer science. Rather, their first task was building external partnerships and engaging top talent in the AI field, which the Department is usually unable to attract on a contracting/project basis.
Not only did the Project Maven team/leadership build partnerships with the commercial tech sector, but they modelled Project Maven after project management techniques from that sector, with product prototypes and underlying infrastructure developed iteratively and tested by the user community on an ongoing basis.
Project Maven throws up a host of ethical challenges around AI-powered weapons, but we should not ignore its learnings: it was not the technology itself, but the organisational and leadership structure that made the project a success. Recall the words of Colonel John Boyd, the US Air Force pilot and military strategist, who would routinely bark: “People, ideas, machines – in that order!” Boyd believed project success with technology came from the intersection between people and technology, and the ideas of those people, not from the technology itself. The learnings from Project Maven reinforce that.
Tony Moroney, programme director for IMI’s Digital Leadership diploma, Senior Executive Experience and AI for C-Suite programmes, believes the biggest challenge to overcome when it comes to technology and digital is mindset. “When people hear digital transformation they tend to focus on the digital side rather than the transformation side, when really it’s just using digital tools to deliver this transformation and provide a better experience for customers. But it’s vital to look at digital transformation as a strategic imperative, and not just as a technology project.”
Likewise, better AI regulation is imperative, and overcoming mindsets around regulation is equally vital: first the EU mindset (that increased regulation automatically leads to increased innovation), and then, loosely, the US mindset (that less regulation leads to increased innovation, or that regulation impedes it). It should be noted that the Biden administration does not necessarily embody this mindset, as it has shown commitment to promoting “trustworthy AI” by working more closely with the EU and proposing domestic legislation around better AI regulation.
Buy-in and Alignment
But “showing commitment” to so-called trustworthy AI and proposing legislation isn’t enough, as the situation with the EU proves: too often, politicians and legislators simply don’t understand the technology, and thus cannot understand the impacts and potential consequences.
What’s required is buy-in and alignment: buy-in from those actually building AI systems and “doing” innovation (rather than merely talking about it), based primarily in Silicon Valley and coastal America. It is these people legislators and regulators should be collaborating and communicating with in order to create regulation and policy that ensures AI systems remain aligned with humanity’s interests, even if they become smarter than us.
Hypothetically, humans may one day build an AI system with cognitive capacities that far outstrip our own. Its developers and engineers may be building it with the intent of solving scientific problems that have baffled us for decades, curing rare diseases, transforming education and so on; all humanitarian aims which, these idealistic developers and engineers may feel, regulation would only get in the way of.
But as the Wall Street Journal recently noted, history is full of powerful entities that caused grave harm in the unchecked pursuit of their goals: logging companies that obliterated rainforests, banks whose complex financial instruments led to a global recession; the rap sheet of unregulated industries is endless. Before we unleash powerful AI on the world, more work needs to be done in the field of AI safety, with the goal of ensuring these systems pursue their objectives in ways that benefit society and align with the interests of their human creators.
Overcoming the challenges of regulation, and getting the innovation/regulation balancing act right, will be key to creating a more transparent and innovative future. Likewise, understanding the implications of regulation is key for senior leaders as they push their organisations towards a more innovative but safer future. The demand for a better understanding of these implications has led IMI to create its AI for C-Suite half-day programme, featuring immersive workshops in which senior leaders work through the implications and challenges of embedding AI into their businesses.