More than a year after the European Union became the first entity to propose regulations governing the use of artificial intelligence, the U.K. has now released a policy paper revealing what its own regulatory approach to AI could look like in a post-Brexit world.
While both the U.K. and the EU ultimately aim to protect similar core principles, such as transparency and safety, the U.K. offers a unique approach for other regulators looking to draft their own laws—one that aims to avoid overregulation and remain future-proof as new technologies continue to emerge.
“Perhaps not surprisingly in a post-Brexit world with a very pro-Brexit conservative party government in the U.K. at the moment, what we’re seeing is a real divergence of approach between the EU, which has gone for a kind of monolithic, principle-based regulation, which will apply to all AI across all sectors in a very consistent way,” said Gareth Stokes, a partner and part of the leadership team for DLA Piper’s global AI practice group.
While the EU’s approach was to cast a wide net on which AI systems were included in its proposed regulation, the U.K. proposed a sector-based perspective.
“It looks as though the U.K. approach is going to be not to have a single monolithic regulation, but rather to grant additional powers to sector-based regulators to regulate the uses of AI that are likely to be deployed by businesses operating in their sectors,” Stokes said.
For example, the U.K.’s Medicines and Healthcare products Regulatory Agency or the Financial Conduct Authority could be tasked with regulating how AI is used in their respective sectors.
“Rather than imposing a set standard as to how to achieve these principles, it’s a lot in the context of that sector to drive what that standard means,” noted Simon Elliott, a partner and head of the data privacy and cybersecurity practice at Dentons. Elliott believes this will be a welcome approach, as it allows for a better understanding of the specific risks that come with each industry sector.
While the EU attempted to define the AI technology it aimed to regulate, the U.K. is focused on regulating the outcome itself, Elliott noted.
“The approach that the EU is taking is actually trying to give some sort of technological parameters to what’s been regulated, whereas the U.K. has focused very much on some of the characteristics of the systems, this adaptability and the autonomy,” Elliott said.
The U.K.’s focus on AI systems’ autonomy and adaptability stems from the belief that the more autonomous or adaptable an AI system is, the more likely it is to cause harm.
“What we mean by adaptable is something that exhibits unpredictable characteristics … because of the level of complexity of that system and the fact that it has learned from data for itself,” Stokes said.
For example, a supermarket automatic checkout might be autonomous, but it is “behaving entirely predictably,” Stokes noted, while an AI art generator is unpredictable—or adaptable—but is certainly not autonomous.
Self-driving cars and similar AI systems considered both autonomous and adaptable are “what the U.K. policy paper says is the major area of mischief to be controlled here,” Stokes said.
What’s more, both Stokes and Elliott noted that by eschewing a technology-based definition, the U.K.’s technology-agnostic approach could help cover future forms of AI that haven’t yet been created.
To be sure, while a sector-based approach allows for more flexibility, it brings its own challenges: different regulators could end up taking divergent approaches to similar issues.
However, “in order to avoid that problem popping up, the policy paper talks about a series of core principles,” Stokes said, which can be thought of as common standards that regulators should be addressing in a consistent manner.
“And, if it works out, that hopefully will ensure that there is a sort of commonality of approach across those different sectors,” he added.
While the EU and U.K. approaches differ, these core principles, such as transparency, security, safety, and fairness, remain fairly similar across both regimes.
“It’s likely that we’re going to end up in a place where there will be a set of common practices developed, which ensure a kind of a broad-brush compliance with both regimes and there’ll be a sort of common denominators type of approach across both regimes that businesses are going to have to live with in reality,” Stokes said.
While it’s difficult to predict whether other countries will follow the U.K.’s approach to defining and regulating AI, what is certain is that the British government has offered an alternative approach that aims to be more future-proof and flexible for businesses.
For now, organizations and legal professionals alike will have to wait until a forthcoming white paper, expected to be published later this year, provides more clarity about the U.K.’s regulatory plan.