Addressing the AI question: Key highlights from Campaign for AI Safety’s submission to TRAI’s regulatory sandbox consultation

Campaign for AI Safety, an Australian unincorporated association of people who are concerned about the dangers of AI, raised issues related to funding and the prominence of bigger players in the AI space.

“The draft framework [for the regulatory sandbox] does not include demonstrating that a new technology can be safely tested. In our view, safety must be the number one priority for testing cutting-edge AI technology,” Campaign for AI Safety (CAS) said in its submission to the Telecom Regulatory Authority of India’s (TRAI) consultation on creating a regulatory sandbox for telecom companies.

As per CAS, issues such as the black box problem (developers not understanding how an AI system works) and AI systems’ ability to learn from the data they are fed and make decisions “pose risks such as losing control which can have devastating consequences, and unknowingly breaking laws or infringing on individual rights.”

What is the consultation about? The proposed regulatory sandbox is aimed at allowing telecom companies to innovate and test new products and services in a controlled environment.

Why does this matter?

Given that TRAI has been heavily discussing the use of AI, especially for curbing spam calls and messages, AI deserves special consideration in this consultation. The consultation paper also mentions that sandboxes can be used as a means of creating regulations around AI. While CAS’s AI-focused suggestions are helpful, they address the overarching development of AI, not just its use in telecom.

Provide funding for companies, CAS says

In its submission, CAS pointed out that the paper does not “explicitly discuss resourcing or funding the proposed framework.” This, it says, is concerning because “without adequate resourcing of organisational capacity and capability (e.g. AI or emerging technology expertise), regulators will be exposed to undue influence and reliance on regulatees that have greater technical knowledge in these sandboxes and testbeds.”

Based on our understanding, CAS is implying that if TRAI does not fund the startups participating in its sandbox, they would have to look elsewhere for resources, eventually turning to bigger players in the AI space and, in turn, making those players even more influential. CAS also expresses concern about regulators collaborating closely with the AI industry, saying this would lead them to “give more weight to private preferences at the expense of the broader public interest.”


CAS’s recommendations on AI:

Prevent the development of powerful AI: “It is a commonly held view that the only way to ensure safety is to delay the development of AI advancing towards human intelligence,” CAS says. It argues that AI technology should only be developed further once it is proven to be safe, and suggests that the Indian government should “research the necessary safety standards and protocols before allowing further development.” Regulators, it adds, need to be empowered to detect and investigate non-compliance and to impose tough penalties. “One measure that will aid the prohibition [of powerful AI] is monitoring the amount of compute used to train foundational models or large language models,” it submits, arguing that companies should be subject to reporting requirements once a training run crosses a certain compute threshold (a minimal illustration follows this list).

Impose safety conditions on AI labs: CAS suggests that regulation for AI labs should be modeled on the licensing requirements in industries such as financial services and healthcare. It says that AI labs should allocate at least 50% of the funds meant for research to the “advancement of alignment, reliability, and explainability until regulators can verify there is not a major risk from their activities.” CAS believes there should be both internal and external safety evaluation teams: the internal team should focus on vetoing the deployment of unsafe AI, while the external team should certify the safety of new models and of incremental advancements on older models. Further, members of the internal safety team should be public officials who would be liable if an unsafe AI model is deployed. CAS adds that forming safety committees and conducting pre-deployment safety evaluations should be mandatory for AI firms.

Mandate the disclosure of training data: AI labs and providers should be required to publicly disclose their training datasets, model characteristics, and evaluation results. This, CAS says, will help build public trust and confidence in the process of developing AI.

Redirect funding to AI safety protocols: CAS points out that many countries have invested billions into AI research, as a result of which the AI industry has become self-sustaining. It therefore suggests that “public funding should now be redirected towards research and development of AI safety protocols and techniques.” This should include the development of AI verification and validation techniques, as well as methods to ensure that Artificial General Intelligence systems (AGI is a hypothetical AI capable of performing any task a human can) do not fail catastrophically or cause unintended harm.
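
CAS’s submission does not specify how a compute-reporting threshold would be measured or enforced. As a rough, hypothetical illustration (ours, not part of CAS’s submission or TRAI’s paper): a widely used rule of thumb approximates the training compute of a dense model as about 6 × (number of parameters) × (number of training tokens), which a regulator could compare against a reporting threshold. The threshold value and model figures below are invented for the example.

```python
# Illustrative sketch only (not from CAS's submission or TRAI's paper).
# Checks whether a training run crosses a hypothetical compute-reporting
# threshold, using the common approximation:
#   training FLOPs ~ 6 * parameters * training tokens

REPORTING_THRESHOLD_FLOPS = 1e25  # hypothetical regulatory threshold


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute for a dense model."""
    return 6 * num_parameters * num_tokens


def must_report(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated run meets or exceeds the hypothetical threshold."""
    return estimated_training_flops(num_parameters, num_tokens) >= REPORTING_THRESHOLD_FLOPS


# Example: a hypothetical 70-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~8.40e+23 FLOPs
print("Reporting required:", must_report(70e9, 2e12))    # False at this threshold
```

In practice, such a rule would also need to define how regulators verify self-reported figures; the approximation above only shows why a simple parameter-and-token disclosure could make a compute threshold checkable at all.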

