On January 10, ChatGPT creator OpenAI updated its usage policies to drop the outright prohibition on using its models for military and warfare purposes. While the updated policy no longer explicitly disallows military usage, it still says that users must not use its services to harm themselves or others. One of the harms notably listed by the company is developing or using weapons.
Since the update, OpenAI has stated that national security uses of AI align with its mission. “For example, we are already working with DARPA [the US’s Defense Advanced Research Projects Agency] to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under “military” in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions,” the company said, according to a TechCrunch report. DARPA announced its collaboration with OpenAI, Anthropic, Google, and Microsoft on the creation of cybersecurity systems back in August 2023.
Why it matters:
While OpenAI has explained the updated policy as accommodating its cybersecurity work with DARPA, the change signals that the company might be softening its stance on the military usage of artificial intelligence (AI). The US military has been using AI for a while now. According to a report by the Associated Press, the US military piloted pint-sized surveillance drones with AI in the Russia-Ukraine war. It has also been using AI to track soldier fitness, monitor rivals in space, and determine when Air Force planes need maintenance. It remains to be seen whether OpenAI and other AI companies will work with the US military, or with the militaries of other jurisdictions, for other such purposes.
Interestingly, this updated usage policy caught the attention of India’s IT Minister, Rajeev Chandrasekhar, who said that this was “confirmation that AI can and will be used for military purposes.” He added that this validates India’s stance of regulating AI through a prism of safety, trust, and accountability.
Nikhil’s take:
OpenAI has quietly changed its terms to allow it to work with militaries and for warfare. This is a worrying development, especially since OpenAI has scraped a large amount of publicly available data from across the world. While it says that its tech should not be used for harm, that doesn’t mean it can’t be used for purposes that aid military and warfare.
Now how does usage of AI in the military and warfare impact India? I don’t want to be alarmist here but IF this is an indication of intent, some thoughts:
1. No data protection: India’s data protection law has an exemption for publicly available personal data. Its use in surveillance, training, and strategic planning, including the microtargeting of specific people, is possible. We made this mistake with the data protection law.
2. Generative AI can be used to analyse large datasets to detect vulnerabilities and identify strategies for cyberattacks.
3. Data of identifiable security personnel is particularly susceptible — for example, location data of security personnel on patrol. Remember the Strava data leak? Strava had patrol data in conflict areas because soldiers were using the app, and such data can be used for simulation exercises and mission planning.
4. It can be used to develop and train autonomous reconnaissance systems.
5. Facial data can be used for target recognition.
So what can India do?
1. Amend the law or issue rules restricting the use of publicly available personal data for AI training, or for military and warfare purposes.
2. Discourage the use of foreign AI tools by military and defence personnel.
3. Direct more resources towards developing Indian AI (we’re already doing a good job).
4. Identify what data of Indian citizens has been collected by OpenAI. Subject the company to technical scrutiny with respect to its datasets, with the option of forcing it to delete datasets that could compromise Indians.
Our openness cannot be our weakness. Again, what I’m writing here is meant to be something to think about. We don’t have clarity on OpenAI’s intent, and we really shouldn’t trust blindly. The onus is on them to assure users and the countries where their tech is in use, and on our government to seek information to ensure we’re protected.
Also read:
- OpenAI Launches GPT Store, An App Store For Custom Versions Of ChatGPT
- OpenAI Responds To The New York Times Copyright Lawsuit Calling It Meritless
- OpenAI To Establish A Safety Advisory Group For Reviewing Frontier AI Models