
Big Tech Companies’ Representatives Debate Varying Approaches to AI Regulation at Carnegie India Summit

Representatives from Meta, Anthropic, Google, and others debated how diverging regulatory approaches could adversely affect the progress of AI.

Fragmented approaches to artificial intelligence (AI) regulation could eventually have serious implications for economic trade, geopolitics, and geo-security, warned Michael Sellitto, head of global affairs at the AI firm Anthropic, during Carnegie India’s Global Technology Summit, held in New Delhi between December 4 and 6. The trouble begins, he explained, “if you have countries taking very different regulatory approaches [to artificial intelligence] that make it so that these models can’t be deployed to users, to businesses within their borders.”

Google executive Markham Cho Erickson echoed this concern. “It’s fairly obvious that this [AI] is a trans-border medium and regulations around safety that are built in one jurisdiction are gonna affect the safety for people in another jurisdiction,” he said. Erickson added that governments should focus on high-level principles for AI regulation and “get a basic agreement on what these technologies should do and things that we should be careful of,” essentially implying that there should be international agreement on the usage and risks of AI.

Why regulatory fragmentation matters

“I think one of the consequences of regulatory fragmentation is you slow down that pace of diffusion and you slow down the ability of taking great ideas from countries like India and spreading that elsewhere in the world,” Microsoft executive Marcus Bartley Johns said at the summit. He added that fragmentation hits smaller players hardest, as they would struggle to plan ahead and attract capital. “If you can’t convince your investors that you’re going to have some regulatory certainty one to two years down the line, that’s going to have an impact on the ability to grow over time,” he explained.

Johns also mentioned that multiple countries across the world are currently discussing how AI can exacerbate cyber-enabled foreign influence operations. “If countries don’t get together and talk about some of the partnerships that are going to be necessary to combat a problem like that, we’re going to be in a worse place,” Johns said.

How to regulate foundational models

Describing Anthropic’s perspective on governing its foundational models, Sellitto said that the company takes a flexible, risk-based approach: it deploys its models in low-risk areas, while in higher-risk areas it tests for safety and makes its models safer before releasing them. Sellitto suggested that this risk-based approach should initially take the form of self-regulation, but that governments should ultimately turn it into actual regulatory requirements for more powerful systems.

However, a risk-based approach cannot be applied to all foundational models, especially not to the most cutting-edge ones, which experts refer to as ‘frontier models’. Johanna Weaver, founding director of the Tech Policy Design Centre at the Australian National University, pointed out this limitation by explaining that risk is inherently a calculation of probability. “Advanced frontier models are operating in a way that it is impossible to predict. Therefore, the risk-based model at that level, I think, is fundamentally flawed,” she said.

Weaver also said that wherever there are issues with the risk-based model, the gaps could be filled by existing legal frameworks. “I think the most important thing when we’re talking about the regulation of AI is, first of all, to acknowledge right up front that we’re not operating in a vacuum, that the existing legal frameworks that operate in our legal systems already apply to artificial intelligence,” she explained.

What discussions on AI regulation can borrow from privacy regulation

“There’s a focus in privacy regulation on differentiation in roles. You see the processor and the controller distinction [in the EU’s General Data Protection Regulation], or in India, the data fiduciary/processor distinction. I think that is a really key element that we have to keep in mind as we’re talking about AI regulation,” said Melinda Claybaugh, privacy policy director at Meta, comparing AI and privacy regulation. She pointed out that different players in the AI space need to be accountable for different things, just as they are under privacy legislation like India’s Digital Personal Data Protection Act, 2023 (DPDP Act).

“There are thoughts around that in the G7 code of conduct, for example, that to really dig into that differentiation of roles between developers and deployers and users and who’s accountable for what,” Claybaugh said. She explained that clarity on who is accountable for what would give businesses comfort and certainty when entering the AI space. While Claybaugh believes a lot can be borrowed from privacy regulation, she suggested that efforts also need to be directed toward identifying the new aspects of AI that still need to be addressed.

Although many of her fellow speakers spoke about the problems of regulatory fragmentation, Claybaugh pointed out that a one-size-fits-all approach to AI regulation is not going to work either. As such, she suggested that governments should focus on finding the gaps in pre-existing regulations around privacy, copyright, bias, and so on, and seeing how those regulations can be adapted to cover AI.

Differences between the AI regulatory approaches of the global north and south

“We are seeing different approaches to regulation around the world, whether that’s in the US, which is a combination of voluntary commitments and safety and testing reporting requirements through the executive order, to the EU, which is much more rules-based,” said Amlan Mohanty, a scholar at Carnegie India. In contrast, according to the discussion at the summit featuring India’s information technology (IT) Minister, India has a “hybrid framework, which is a combination of market-based interventions and rights-based regulation,” Mohanty pointed out.

Weaver also flagged this marked difference of opinion between the global north and global south. “For the global south, much of the conversation in this space has been about the opportunity of artificial intelligence. That’s the awakening that has happened in the global south in the last year. For the global north, most of the conversation has been around the risks of artificial intelligence,” she said.

“The global south is thinking about regulation to enable benefit,” said Rahul Matthan, a partner at the law firm Trilegal, adding to the distinctions between the global north and south. He argued that India’s data protection law and the discussion around techno-legal infrastructure in the country have been focused on using data for empowerment. If India doesn’t create regulatory frameworks that allow it to take advantage of AI, he said, it is doing itself a disservice.

