Some of the “blunt strategies” proposed in India’s Non-Personal Data Report could harm Indians, isolate Indian companies from their global counterparts, and lead other countries to “retaliate” with similar “data nationalisation” measures that would be “counterproductive to India’s interests”, Mozilla said in its comments on the report by MEITY’s committee of experts. While the report focuses on enabling access to data for the Indian government and businesses, “ensuring the privacy and security of this data is merely noted as an afterthought in most instances”, Mozilla said. Before setting up a framework for non-personal data, the government should prioritise the passage of a strong data protection law, accompanied by reform of government surveillance, it recommended.

“Ultimately, a maximalist focus on boosting domestic industry could hurt the very businesses it is meant to serve, while limiting competition, and diminishing the choices of users. These impacts would be magnified if another country were to enact a similar regime on Indian firms,” Mozilla noted.

Issues: categorisation is problematic, non-personal data includes trade secrets

  • Treating data as a national resource is problematic: This approach not only undermines the Puttaswamy judgment, but paves the way for regulations that “undermine individual rights” and are “in stark contrast to the expectations of Indian users”. Replacing the fundamental right to privacy with a notion of ownership, which itself can be “easily divested by state and non-state actors”, “leaves individual autonomy in a precarious position”, Mozilla said. “How can one have individual autonomy when one’s privacy may be violated by virtue of being a member of a community?” it asked.
  • Non-personal data constitutes trade secrets, can hurt privacy: Non-personal data can constitute trade secrets, and insights derived from such data may be protected by intellectual property laws. “Both raise concern around the fundamental right to carry out business and India’s obligations under international trade law,” Mozilla said. “Turning over this information to the government or private entities without any checks and balances also raises significant privacy concerns. Information about sales location data from e-commerce platforms, for example, can be used to draw dangerous inferences and patterns regarding caste, religion, and sexuality,” it said.
  • Mandating data-sharing with smaller firms needs a strategic approach: Mandating large firms to share aggregate data with smaller firms can be done in some cases “on reasonable market terms”, but it needs to be approached strategically. Mozilla said it could be done through “a targeted incentive-driven framework, rather than imposing blanket coercive measures that will alienate global companies and likely raise legal challenges”.
  • Data localisation increases surveillance risk: Localising data in servers in India is likely to make it more susceptible to overbroad access by law enforcement and surveillance agencies, especially in the absence of safeguards against surveillance in India. “Moreover, storing a copy of all personal data pertaining to Indians in a handful of locations could create a ‘honey pot’ for malicious actors, thereby increasing the risk of a breach with a profound effect on India’s citizenry.”
  • Defining non-personal data by exclusion is flawed: The committee’s approach is that everything that is not personal data under the Personal Data Protection Bill is non-personal data. This binary outlook ignores that modern re-identification methods make the classification irrelevant: anonymised or non-personal information can be linked back to individuals using alternative data sources (a sketch of such a linkage attack follows this list).
  • Consent for usage of non-personal data: The report states that “Personal Data that is anonymized should continue to be treated as the Non-Personal Data of the data principal”, Mozilla said (for context, the report also recommends that the data principal provide their consent for anonymisation and usage of anonymised data). This is inherently contradictory, as anonymisation should make it legally and practically infeasible to distinguish one data principal from another, Mozilla said.
    • “Such a move may force data custodians to not anonymise data sufficiently to be able to track such consent, placing such data at an additional privacy and security risk. Alternatively, it would require the re-identification of the principal in an anonymized dataset. Doing either of these things would defeat the whole purpose of the anonymization in the first place,” it submitted.
  • Absence of protections: The report acknowledges that no anonymisation technique is foolproof, and that it can bring harm to individuals, groups, and communities. But it does not detail what the privacy protections could be, nor does it elucidate how these protections could be applied or enforced for non-personal data.
    • “Without these protections, there is nothing to prevent the government or any business (foreign or domestic) from exploiting non-personal data in ways that contravene the autonomy and dignity of the individual in question. Therefore, any regulatory framework for non-personal data needs to be more focused on how such data will be protected meaningfully, rather than focusing on how it should be exploited for national interest.”
  • Classifying non-personal data is unrealistic: Further classifying non-personal data into community, public, and private non-personal data is “reductive and removes nuance about datasets in the real world”. The classifications are neither mutually exclusive, nor do they provide clarity on how they would operate in practice.
  • Definition of community data is wide: The definition of community data is wide-ranging and ill-suited to a framework being designed to protect people. “Under this classification, religious groups; people from the same educational institutions; vulnerable communities based on class, caste, and economic criteria; and people who once lived in a residential locality, are all valid communities with enforceable data rights. They can all have conflicting interests over data that they may have shared with government and private platforms.”
    • Without a guiding legal principle, companies or service providers will be forced to make “legally binding decisions on what is a valid community, what is the scope of data that can or cannot be shared with such communities, and how to resolve disputes between competing claims to represent a community’s interest”.
  • Data trusts and data cooperatives can give rise to their own issues: While they have the potential to provide guardrails for trustworthy data management, they can also aggravate existing challenges, or create new and complex ones of their own. If data trusts act as custodians, they will require “impeccable levels of security and data management”, since centralising data in a trust broadens the attack surface for abuse and misuse. There is also the question of how trusts would legally interact in a global economy and society, given that some jurisdictions may rely on individual rights while others rely on collective rights.
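
For context on the re-identification risk Mozilla flags: “anonymised” data is typically re-identified through linkage attacks, in which a dataset stripped of names is joined with a public auxiliary source on shared quasi-identifiers. The minimal sketch below uses entirely hypothetical records and field names (pincode, birth year, gender) purely for illustration:

```python
# Minimal sketch of a linkage attack: joining an "anonymised" dataset
# with a public auxiliary dataset on shared quasi-identifiers.
# All records and field names here are hypothetical.

# "Anonymised" dataset: names removed, quasi-identifiers retained.
anonymised = [
    {"pincode": "110001", "birth_year": 1985, "gender": "F", "purchase": "medical supplies"},
    {"pincode": "560034", "birth_year": 1992, "gender": "M", "purchase": "groceries"},
]

# Public auxiliary dataset (e.g. an electoral roll or a social profile)
# containing names alongside the same quasi-identifiers.
auxiliary = [
    {"name": "A. Sharma", "pincode": "110001", "birth_year": 1985, "gender": "F"},
    {"name": "R. Kumar", "pincode": "560034", "birth_year": 1992, "gender": "M"},
]

QUASI_IDENTIFIERS = ("pincode", "birth_year", "gender")

def link(anon_rows, aux_rows):
    """Yield (name, record) pairs where a unique join re-identifies a person."""
    for anon in anon_rows:
        key = tuple(anon[q] for q in QUASI_IDENTIFIERS)
        matches = [aux for aux in aux_rows
                   if tuple(aux[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique match defeats the anonymisation
            yield matches[0]["name"], anon["purchase"]

for name, purchase in link(anonymised, auxiliary):
    print(f"{name} bought {purchase}")  # the 'anonymous' record is now personal data
```

The point of the sketch is that no single dataset needs to contain a name for re-identification to succeed; a combination of otherwise innocuous attributes is often unique enough to act as one.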

Recommendations

  • Enforce a data protection law first: A data protection law should be in place first, to ensure that personal data, as defined under the draft law, is excluded from any data sharing. A policy for non-personal data will also need to “mitigate the risks of inadvertent re-identification of individuals through combining apparently anonymized data points”.
  • Identify the most important datasets: Rather than imposing a blanket coercive measure to open up datasets, the government should identify the aggregate and anonymised datasets that would be “most valuable to new, nascent business”. For instance, Uber has chosen to release anonymised aggregate data to improve planning in India and other countries.
  • Competition policy should encourage designing for interoperability, along with standards-centric design and implementation. This could include coupling positive incentive “carrots”, such as potential safe harbours, with corresponding “sticks”, such as heightened merger review standards and strengthened enforcement of rules and policies against anti-competitive behaviour by firms.
  • The classification of non-personal data into categories should be “completely redone”, “with a comprehensive public consultation that uses evidence-based research to create a new classification”.

*

Read more:

  • Summary: Report on Non-Personal Data Framework released by MEITY’s Committee of Experts [read]
  • Mandatory sharing of non-personal data will undermine innovation: BSA on non-personal data report [read]
  • Non-Personal Data Report ‘sound in premise but murky in detail’, says iSPIRT Foundation [read]