Following our incisive discussion last week on the Governance of Non-Personal Data, based on the Revised Draft Report on Non-Personal Data by MeitY's committee of experts, we've summarised key challenges and recommendations shared by our speakers and participants.
You may download your copy of the report here. Some excerpts from the report:
Stated objectives and their validity: If the objective of the proposed framework is to give Indian startups an advantage or fillip, then the motive should not be shrouded in the cloak of ‘community’ benefit. It’s also unclear how an institutional setup would help Indian startups; companies cannot be expected to hand over datasets to competitors simply because they have been ordered to do so.
“If the objective is that we are regulating because we want to privilege Indian startups over any other kind of entity, then we need to say that upright, and stop guising it in the word ‘community’, because then that gives a very different kind of perception,” — Beni Chugh, Dvara Research
Dispute resolution mechanisms: If a data requester finds that the data custodian has not met their request and approaches the Non-Personal Data Authority (NPDA), it’s unclear how the regulator would balance the competing claims. On the one hand, the custodian may have denied access because data sharing may not be in the community’s best interest; on the other, the requesting entity would insist on having access.
Conflicts with other regulatory bodies: A Non-Personal Data Authority may conflict with other institutions, such as the forthcoming Data Protection Authority of India or the Competition Commission of India (CCI); it would need to be in constant deliberation with such organisations to succeed in its goals. Given the inherently inter-sectoral nature of the datasets and the stakeholders concerned, all decisions would rely on continuous communication and consultation between regulators.
Potential conflict in goals: Further, the agencies regulating non-personal data and personal data could face a conflict of goals, with the former concerned with “unlocking economic value” and the latter looking to protect privacy.
Anonymisation will carry privacy risks: The risk of re-identification always exists with anonymised personal data. Simply applying one level of anonymisation and thereafter calling the result Non-Personal Data (under the current framework) does not take away the existing privacy risks. The anonymised data may not even be sufficiently ring-fenced from privacy litigation.
Anonymisation option exposes companies to risk: FinTech companies that use data for analytics don’t necessarily need personally identifiable information (PII), and prefer to anonymise their data to reduce their risk exposure. These companies could also use a third party to process analytics, and wouldn’t want to share personal information with it.
“This puts an additional barrier for me because I now require consent and this actually makes the data which I am storing more secure. And why should there be a barrier for that?” — Prasanto K. Roy, FTI Consulting
More challenges and recommendations in the report.
Also note: we’re hosting a discussion on Data Policies and Artificial Intelligence on Thursday, 28th January, 2021.
- MediaNama’s discussion on the Governance of Non Personal Data was supported by Microsoft and Facebook.
- MediaNama is hosting the discussion on Data Policies and Artificial Intelligence with support from Flipkart, Facebook and Microsoft. The Centre for Internet and Society is our community partner for this discussion.