About a year after the European Union became the first entity to propose regulations governing the use of artificial intelligence, Canada is following in its footsteps by proposing its own first-ever private-sector regulation of AI.
Similarities between the two regulations reveal the influence, and the pressure, that the EU's approach to AI is exerting on other jurisdictions looking to rein in an industry that has largely gone unchecked.
This June, the Canadian government introduced Bill C-27. If passed, the bill would not only reform federal private-sector privacy law but also regulate artificial intelligence under a new Artificial Intelligence and Data Act (AIDA).
To be sure, both the Canadian and the European Union bills are still in draft form. But if the European Union passes its bill, it will create pressure for other countries to pass their own—starting with Canada.
“With the EU AI regulation, we’re going to see a greater spotlight being shone on [AI-related] harms, because if it passes, which I think it will, we’re going to start seeing more regulatory action, more enforcement and action being taken against people in the AI industry,” said Justin P’ng, an associate in the privacy and cybersecurity group at Canadian law firm Fasken. “There’s going to be that greater spotlight on it. And I think that’s going to put more pressure on other jurisdictions to act, to regulate something that has not been regulated historically or recently.”
P’ng noted there is potential for a Brussels Effect—essentially the EU’s influence and power shaping and regulating global markets, similar to what happened with the EU’s General Data Protection Regulation (GDPR).
“We’ll see that in respect to the EU AI regulation because they are the first ones in the space, and because other jurisdictions, like with the GDPR, tend to learn from their initial experience,” he said. “And because the EU generally, as a lawmaking jurisdiction, they do exercise significant influence over how global companies operate. I think you’ll see a pretty similar effect take place where other jurisdictions will feel compelled to also respond to an EU regulation, kind of like what Canada is doing with the Canadian AI Act.”
The Canadian bill takes a harms-based approach similar to that of the EU’s Artificial Intelligence Act, with the AIDA focusing on mitigating the risks of harm and bias in the use of “high-impact” AI systems.
This approach makes sense given that the industry is evolving too quickly to enumerate which specific types of AI should be regulated, said Marijn Storm, an associate at Morrison & Foerster in Brussels. By the time a regulation passed, those systems would likely already be outdated.
However, there are some differences between the laws.
The EU’s Artificial Intelligence Act categorizes AI systems into four risk tiers: unacceptable, high, limited and minimal risk. Most of the compliance requirements under that bill apply to systems considered “high risk,” Storm said.
In comparison, “the Canadian system somewhat kicks the can down the road,” he added. “They say look, we cover these obvious AI techniques like machine learning, but we can also cover other techniques that generate content or make decisions, so that sort of creates this big loophole.”
“The Canadian AI act focuses more on high-impact systems,” P’ng added. “That’s the term it uses.”
For now, Canada hasn’t defined what “high-impact” means, deferring the definition to regulations that will follow if the bill passes. But P’ng said we are likely to see more similarities with the EU in the types of activities regulated.
“At the end of the day in terms of what the practical effect of that is, what the operational impacts of that is, I don’t think that there’s going to be a huge difference in terms of the activities that these two seek to regulate and the kinds of outcomes that can result in terms of the regulation and the design, development and production of AI systems,” he said.
For now, P’ng said, it is best for companies with AI systems to assume that they are covered by the act, which comes with a hefty enforcement regime for noncompliant companies.
“The scope of regulated activities is fairly broad and it impacts the entire supply chain of AI, not just making it available, not just acting as a service provider of AI systems but also designing or developing it,” P’ng said. “So if you are within that AI supply chain, if you’re the one actually putting together an AI system, that is a regulated activity, and so it really does cover, at least in theory, everybody in the AI industry writ large.”
Those penalties are steep: under AIDA, they can reach $25 million or 5% of an entity’s gross global revenues in the preceding financial year, as well as a term of imprisonment of up to five years for those who developed the AI system, according to the bill.
“I think the bottom-line picture is that they’re quite significant,” P’ng said. “And I think it should give people in the industry some pause in terms of, how badly do they want to comply with the AI act? I think the answer is quite badly.”