“If you have countries taking very different regulatory approaches [to artificial intelligence] that make it so that these models can’t be deployed to users, to businesses within their borders,” said Michael Sellitto, head of global affairs at the artificial intelligence (AI) firm Anthropic, during Carnegie India’s Global Technology Summit, held in New Delhi from December 4 to 6. He explained that fragmented approaches to AI regulation could eventually have serious implications for economic trade, geopolitics, and geo-security.
Google executive Markham Cho Erickson echoed this concern. “It’s fairly obvious that this [AI] is a trans-border medium and regulations around safety that are built in one jurisdiction are gonna affect the safety for people in another jurisdiction,” he said. Erickson added that governments should focus on high-level principles for AI regulation and “get a basic agreement on what these technologies should do and things that we should be careful of,” essentially implying that there should be international agreement on the usage and risks of AI.
Why regulatory fragmentation matters
“I think one of the consequences of regulatory fragmentation is you slow down that pace of diffusion and you slow down the ability of taking great ideas from countries like India and spreading that elsewhere in the world,” Microsoft executive Marcus Bartley Johns said at the summit. He further mentioned that fragmentation hits smaller players, who would struggle to plan ahead and attract capital as a result. “If you can’t convince your investors that you’re going to have some regulatory certainty one to two years down the line, that’s going to have an impact on the ability to grow over time,” he explained.
Johns mentioned that multiple countries across the world are currently discussing how AI can exacerbate cyber foreign influence operations. “If countries don’t get together and talk about some of the partnerships that are going to be necessary to combat a problem like that, we’re going to be in a worse place,” Johns said.
How to regulate foundational models
Speaking about how Anthropic regulates its own foundational models, Sellitto said that the company takes a flexible approach: where the risks are low, it deploys its models, and where the risks may be higher, it tests for safety and makes its models safer before releasing them. Sellitto suggested that this risk-based approach should initially take the form of self-regulation, but that governments should ultimately codify it into actual regulatory requirements for more powerful systems.
However, a risk-based approach cannot be applied to all foundational models, especially not to the most cutting-edge ones, which experts refer to as ‘frontier models’. Johanna Weaver, the founding director of the Tech Policy Design Centre at the Australian National University, pointed out these challenges by explaining that risk is inherently a calculation of probability. “Advanced frontier models are operating in a way that it is impossible to predict. Therefore, the risk-based model at that level, I think, is fundamentally flawed,” she said.
Weaver also said that wherever there are issues with the risk-based model, the gaps could be filled by existing legal frameworks. “I think the most important thing when we’re talking about the regulation of AI is, first of all, to acknowledge right up front that we’re not operating in a vacuum, that the existing legal frameworks that operate in our legal systems already apply to artificial intelligence,” she explained.
What discussions on AI regulation can borrow from privacy regulation
“There are thoughts around that in the G7 code of conduct, for example, that to really dig into that differentiation of roles between developers and deployers and users and who’s accountable for what,” Claybaugh said. She explained that clarity on who is accountable for what would give businesses comfort and certainty when entering the AI space. While Claybaugh believed there is a lot that can be borrowed from privacy regulations, she suggested that efforts need to be directed toward identifying the new aspects that must be addressed in the AI space.
Although many of her fellow speakers spoke about the issues associated with regulatory fragmentation, Claybaugh pointed out that a one-size-fits-all approach to AI regulation is not going to work. As such, she suggested that governments will have to focus on finding the gaps in pre-existing regulations around privacy, copyright, bias, etc., and seeing how these can be adapted to cover AI.
Differences between the AI regulatory approaches of the global north and south
“We are seeing different approaches to regulation around the world. Whether that’s in the US, which is a combination of voluntary commitments and safety and testing reporting requirements through the executive order, to the EU, which is much more rules-based,” said Amlan Mohanty, a scholar at Carnegie India. In contrast, according to the discussion at the summit featuring India’s information technology (IT) Minister, India has a “hybrid framework, which is a combination of market space, interventions, and rights-based regulation,” Mohanty pointed out.
Weaver also flagged this marked difference of opinion between the global north and global south. “For the global south, much of the conversation in this space has been about the opportunity of artificial intelligence. That’s the awakening that has happened in the global south in the last year. For the global north, most of the conversation has been around the risks of artificial intelligence,” she said.
“The global south is thinking about regulation to enable benefit,” said Rahul Matthan, a partner at the law firm Trilegal, adding to the distinctions between the global north and south. He argued that India’s data protection law and the discussion around techno-legal infrastructure in the country have focused on using data for empowerment. He said that if India doesn’t create regulatory frameworks that allow it to take advantage of AI, it is doing itself a disservice.