“The draft framework [for the regulatory sandbox] does not include demonstrating that a new technology can be safely tested. In our view, safety must be the number one priority for testing cutting-edge AI technology,” Campaign for AI Safety (CAS) said, in its submission to the Telecom Regulatory Authority of India’s consultation on creating a regulatory sandbox for telecom companies.
As per CAS, issues such as the black box problem (developers not understanding how an AI system works) and AI systems’ ability to learn from the data they are fed and make decisions “pose risks such as losing control which can have devastating consequences, and unknowingly breaking laws or infringing on individual rights.”
What is the consultation about? The proposed regulatory sandbox is aimed at allowing telecom companies to innovate and test new products and services in a controlled environment.
What is Campaign for AI Safety? CAS is an Australian unincorporated association of people who are concerned about the dangers of AI.
Why does this matter?
Given that TRAI has been actively discussing the use of AI technology, especially for curbing spam calls and messages, AI deserves special consideration during this consultation. The regulatory sandbox consultation paper also mentions that sandboxes can be used as a means of creating regulations around AI. While CAS’s AI-focused suggestions are helpful, they address the overarching development of AI rather than just its use in telecom.
Fund the sandbox framework adequately, CAS says
In its submission, CAS pointed out that the paper does not “explicitly discuss resourcing or funding the proposed framework.” This, it says, is concerning because “without adequate resourcing of organisational capacity and capability (e.g. AI or emerging technology expertise), regulators will be exposed to undue influence and reliance on regulatees that have greater technical knowledge in these sandboxes and testbeds.”
Based on our understanding, CAS is arguing that if TRAI does not adequately fund and staff the sandbox framework, regulators will lack in-house AI expertise and will end up relying on the technical knowledge of the companies they oversee, particularly the bigger players in the AI space, which would in turn make such players even more influential. CAS also expresses concern about close collaboration with the AI industry, saying that this would lead regulators to “give more weight to private preferences at the expense of the broader public interest.”
CAS’s recommendations on AI:
Prevent the development of powerful AI: “It is a commonly held view that the only way to ensure safety is to delay the development of AI advancing towards human intelligence,” CAS says. It warns that AI technology should only be developed further once it is proven to be safe and suggests that the Indian government should “research the necessary safety standards and protocols before allowing further development.” It says that regulators need to be empowered to detect and investigate non-compliance and impose tough penalties. “One measure that will aid the prohibition [of powerful AI] is monitoring the amount of compute used to train foundational models or large language models,” it submits. It argues that companies should be subject to reporting requirements once they exceed a certain threshold of computation.
Impose safety conditions on AI labs: CAS suggests that the regulation for AI labs should be modeled after the licensing requirements in industries such as financial services and healthcare. It says that AI labs should allocate at least 50% of the funds meant for research to the “advancement of alignment, reliability, and explainability until regulators can verify there is not a major risk from their activities.” CAS believes that there should be both external and internal safety evaluation teams. The internal team should focus on vetoing the deployment of unsafe AI, and the external team should certify the safety of new models and incremental advancements on older models. Further, members of the internal safety team should be public officials who would be liable in case an unsafe AI model is deployed. It suggests that forming safety committees and conducting pre-deployment safety evaluations should be mandatory for AI firms.
Mandate the disclosure of training data: AI labs and providers should be required to publicly disclose their training datasets, model characteristics, and evaluation results. This, CAS says, would help build public trust and confidence in the process of developing AI.
Redirect funding to AI safety protocols: CAS points out that many countries across the world have invested billions in AI research, as a result of which the AI industry has now become self-sustaining. Therefore, it suggests that “public funding should now be redirected towards research and development of AI safety protocols and techniques.” This should include the development of AI verification and validation techniques, as well as methods to ensure that Artificial General Intelligence systems (AGI refers to a hypothetical AI capable of performing all the tasks a human being can) do not fail catastrophically or cause unintended harm.
