SFLC.in has published a detailed analysis of the future of safe harbor, titled "Intermediary Liability 2.0: A Shifting Paradigm". Below is an excerpt from the report, published under CC BY-NC-SA 4.0 and republished with permission. You may download the report here.

An increase in the number of users of online platforms that allow the sharing of user-generated content, coupled with a lack of media literacy, has led to an explosion of harmful content ranging from hate propaganda to disinformation to revenge porn and child pornography. Targeted messages aimed at manipulating democratic processes, as seen in the 2016 US Presidential election and the 2018 Brazil elections, led to greater scrutiny of platforms' accountability for user-generated content, often focusing on the technology rather than systematic interventions.

Platforms like Facebook are no longer passive players like blogging platforms or web hosts; they decide the reach and ranking of content. What users see on their feeds is determined by their past browsing habits and their posts and shares. Platforms therefore have a major role in ensuring that their services are safe and that the spread of disinformation is contained. The initiative of intermediaries in working with fact-checkers across the world is a positive move and will improve users' trust in the content shared.

Although law is often said to lag behind technology, recent developments have shown that content platforms were slow to identify the root causes of the rise of disinformation on their services. Intermediaries could have reacted faster to the problem of harmful messages on their platforms, which led to harm in the offline world, including incidents of physical violence. This inaction has contributed to a decline in users' trust in the platforms in the recent past.

The trust deficit of online platforms and incidents attributed to harmful content spread online have been used by Governments in various countries as justification for new regulations that seek to control information on these platforms. Whether it is the NetzDG law in Germany, mandatory backdoors under the new Australian law, or the proposed amendments to the Intermediary Rules in India, the underlying narrative has been the need to control harmful content spread on social media platforms.

In India, the Shreya Singhal judgment has given intermediaries much-needed certainty on the requirements for enjoying safe-harbour protection. However, the proposed amendments to the Intermediaries Guidelines Rules endanger this protection.

Attempts at regulating intermediaries by weakening encryption or by mandating the automated takedown of a broad range of content deemed harmful will be counterproductive and will affect the fundamental rights to free speech and privacy guaranteed to citizens. However, a laissez-faire approach permitting intermediaries complete freedom is also not advisable, as the real-world harm caused by illegal content cannot be ignored.

Governments should be free to mandate that intermediaries ensure the quick resolution of legitimate takedown requests and have in place governance structures and grievance mechanisms to enable this.

Although intermediaries can explore technology solutions like Artificial Intelligence tools to flag harmful content, there should be greater investment in human moderation. For a country like India, with multiple languages and diverse cultures, AI tools have their limitations, and platforms will have to invest in people and resources to make the online world a safe space.

Intermediaries need to show more commitment to keeping their platforms safe and secure. The oversight board proposed by Facebook is a step in the right direction. However, there are no quick fixes to the enormous problem of harmful content.

Based on discussions with various stakeholders over a series of interviews and roundtables, the recommendations can be summarized as follows:

Recommendations for Government:

  • Laws on intermediary liability should provide clear guidance on the types of content that are deemed illegal.
  • The notice-and-action procedure should protect the rights of users and should not be ambiguous.
  • The law should mandate governance structures and grievance mechanisms on the part of intermediaries, enabling the quick takedown of content determined to be illegal by the judiciary or an appropriate Government agency.
  • The right to privacy of users should be protected, and there should not be any mandate forcing intermediaries to weaken encryption or provide backdoors.
  • Government should work with intermediaries to educate users in identifying disinformation and in the secure use of the Internet.
  • The Government should formulate training programmes on technology law for the lower judiciary so that developments in jurisprudence in this area are disseminated.

Recommendations for Intermediaries:

  • Intermediaries should invest resources to ensure that their platforms are safe and secure for users.
  • Technology, including Artificial Intelligence, has its limitations, and there should be proper safeguards to ensure that automated tools do not lead to the takedown of legitimate content.
  • Governance structures and grievance redressal mechanisms have to be instituted to resolve legitimate requests from the judiciary and the Government.
  • Intermediaries need to work closely with fact-checkers and mainstream media to reduce the spread of disinformation on their platforms. There should be greater investment in resources and human moderation to cover content in regional languages.
  • There should be greater cooperation between intermediaries to flag extreme content like terrorist content and child pornography.
  • The “filter bubble” effect, where users are shown similar types of content, means that users are not exposed to opposing views and debates, making them easy targets of disinformation. Intermediaries should work on reducing this echo chamber effect so that posts flagged as disinformation do not go viral.

*

Copyright 2019 SFLC.in. Licensed under CC BY-NC-SA 4.0