Data related to Google’s digital services cannot be accessed by third parties under contractual conditions, as reported or aggregated information, or under a request to port data to another service, Google said in its submission to the European Union’s public consultation on its Digital Services Act package. The aim of this consultation is to assess whether existing European laws, such as the e-Commerce Directive, need to be amended to account for the challenges posed by the evolving global digital landscape, including competition, combating online piracy and counterfeit goods, and online content moderation. The public consultation will end on September 8.

However, in its blog post announcing the submission, Google said that allowing users to export their data benefits them, and that “providing access to aggregated datasets could benefit R&D in a range of industries while safeguarding user data privacy”. It expressed support for data mobility and access but stressed that product quality and innovation incentives must not be “sacrificed”.

Collaboration between digital platforms, the European Commission, and member states would be required to identify specific use cases where data access or interoperability would promote innovation, Google submitted. It said that sharing certain kinds of data, such as click data (data about how users point and click on different elements on a screen), would in effect give rivals insight into how Google responds to search queries, thereby allowing them to clone Google’s search results.

In April 2020, at the onset of the COVID-19 pandemic and the announcement of nationwide lockdowns across the world, Google had released aggregated, anonymised data from Google Maps to track how people’s movement habits had changed since lockdowns were imposed. The aim was to help public health officials make better informed decisions about things such as business hours and deliveries of essential services, and how to time public transport so that social distancing could still be maintained.

In terms of compliance, Google said that having different processes and obligations for notifying, detecting and removing illegal content, goods and services is “unduly burdensome”. Similarly, requirements to have a legal representative or an establishment in more than one EU country, and different procedures and points of contact for complying with local laws, were “very burdensome” as well.

Google submitted that because of the pandemic, its Q2 FY2020 revenues fell by 2% year-on-year, and that a Single Market (a common EU market with the same rules throughout the region) would help with its economic recovery.

‘Liability of intermediaries must remain restricted, aka, preserve safe harbour’

The EU’s e-Commerce Directive imposes limited liability on online intermediaries, taking into account the degree of knowledge and control that intermediaries have over content on their services. Google submitted that such a graduated regime should continue. This includes: issuing notice and then taking down illegal content, prohibiting general monitoring requirements, incentivising additional action, and retaining the country of origin principle.

The company also submitted that this liability should take into account the functions of different online intermediaries so that the same liability that is imposed on a social media platform is not imposed on an app store or search engine.

Expand services recognised as online intermediaries in the e-Commerce Directive

The e-Commerce Directive currently distinguishes between three types of services offered by online intermediaries: ‘mere conduits’, ‘caching services’, and ‘hosting services’. Google wants these categories to be expanded to explicitly include other services, and the underlying conditions that grant such services safe harbour to be broadened.

  • Caching services, including search engines and web indexes, should be granted safe harbour and codified as such in the Digital Services Act as they just make “onward transmission of that [stored] information to users” “more efficient”.
  • Create a separate category for cloud service providers and software-as-a-service (SaaS) providers as they do not have actual or contractual control and authority over the content that their clients store or share using their products. The liability, instead, should lie with the violating client who is using the cloud service or the SaaS product. Here, Google appears to be arguing that SaaS and cloud service providers are data processors, not fiduciaries, and thus are not liable for the content.
  • Hosting services should be granted safe harbour as long as they act quickly to remove or block access to violative content when they obtain actual knowledge of it. General monitoring of content must be discouraged, the company said.
  • Case law should move away from the distinction between “active” and “passive” hosts so that intermediaries that engage in voluntary moderation (“active”) are not deemed to have “knowledge of all content on the platform”. If this continues, intermediaries would either stop voluntary moderation, at the risk of not removing CSAM and terrorist content, or over-remove content (read: censorship). Instead, Google proposes that intermediaries should be given incentives to voluntarily review content that violates one kind of law (such as terrorism) without that implying that the platform has knowledge of all types of illegal content (such as defamation).

Online content moderation, taking down illegal goods and services

Google wants all online platforms to be legally required to:

  • Maintain an effective ‘notice and action’ system for reporting illegal goods or content, and an effective ‘counter-notice’ system for users to dispute erroneous removal decisions
  • Have appropriately trained and resourced content moderation teams
  • Systematically respond to requests from law enforcement authorities
  • Cooperate with national and law enforcement authorities, according to clear procedures
  • Be transparent about their content policies, measures and their effects

Google wants only platforms at particular risk of exposure to illegal activities by their users to be legally required to:

  • Maintain a system for assessing the risk of exposure to illegal goods or content
  • Request professional users to identify themselves clearly, that is, follow a know-your-customer (KYC) policy

Google doesn’t want any online platform to be legally required to:

  • Cooperate with ‘trusted flaggers’, that is, trusted organisations with proven expertise who can report illegal activities for fast analysis
  • Detect illegal content, goods or services
  • Inform consumers when the platform becomes aware of product recalls or sales of illegal goods
  • Cooperate with other online platforms for exchanging best practices, sharing information or tools to tackle illegal activities

Google highlighted that it uses a “people + machine” framework to act against the illegal offering of goods and services online and against content shared by users. For instance, illegal content reported by users through its reporting tools is individually reviewed by Google since “deciding whether content is illegal under local laws can often be challenging, and highly context-dependent”. To stave off copyright infringement and piracy, Google depends on automated tools such as YouTube Content ID and its Search demotion signal.

Mechanisms that Google does not have

Google does not provide the following:

  • Information to notice providers about the follow-up on their report
  • Information to buyers of a product which has then been removed as being illegal

For online content moderation, Google now has over 10,000 people working across the company. It says that these reviewers are provided “robust wellbeing [sic] programs and psychological support”. The company did not specify the exact cost of the measures it has implemented to remove different types of illegal content, goods and services; it merely noted, at least twice in its submission, that they “have invested hundreds of millions of dollars in these efforts”.

Problems with automated content moderation, as per Google

  • Over-reliance on automation could impact marginalised social groups disproportionately due to algorithmic bias.
  • Misclassification is a huge challenge.
  • Machine learning tools are vulnerable to adversarial examples, that is, intentional inputs for machine learning models that cause the model to make mistakes.
  • Even though Google uses hashes to catch exact copies of offending content before it is available for public view, even the slightest change in the content, such as its use in a media report, changes the hash, making it difficult for automated tools to catch the content. Hashes, in this context, can be understood as unique fingerprints of each piece of content that are indexed and used to take down terrorist content or child sexual abuse material (CSAM). A brief illustration follows below.
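
To illustrate why exact-match hashing is so brittle, here is a minimal, hypothetical Python sketch (not Google’s actual tooling): with a cryptographic hash such as SHA-256, changing even a single byte of the content produces a completely different digest, so an edited copy of flagged material no longer matches the indexed fingerprint.

```python
# Minimal sketch, illustrative only -- not Google's actual matching system.
# Exact-match fingerprints break as soon as the content changes even slightly.
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest that serves as the content's 'fingerprint'."""
    return hashlib.sha256(content).hexdigest()

original = b"bytes of a flagged video file"
edited = b"bytes of a flagged video file."  # a one-byte edit, e.g. from re-encoding or cropping

print(fingerprint(original))
print(fingerprint(edited))
# The two digests share nothing in common, so a filter indexed on the original
# hash misses the edited copy entirely.
```

Systems built for this problem typically rely on perceptual hashing, which tolerates small edits; still, as the submission notes, re-contextualised uses, such as a clip embedded in a media report, can evade automated matching.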

Fighting disinformation

Google has submitted that to fight disinformation, platforms must:

  • Transparently inform users about political advertising and sponsored content, especially around elections
  • Tackle fake accounts, fake engagement, bots and inauthentic user behaviour that amplifies false/misleading narratives
  • Carry out adapted risk assessments and mitigation strategies

Of the other measures that the Commission had proposed, Google submitted that auditing systems over platforms’ actions and risk assessments, and regulatory oversight and auditing of such assessments, were not that necessary.

Google remains non-committal on questions related to competition, market power

Google selected “neither agree nor disagree” for all statements that the questionnaire proposed on digital platforms as gatekeepers of power. These included statements about whether consumers had enough choices among online platforms, whether they could easily switch between similar online platforms/services, whether they could easily port their data to alternate services, whether there was interoperability, whether there was information asymmetry between platforms and consumers in terms of knowledge about the other, whether it was easy for an SME to enter the market, and whether certain large online platforms created barriers to entry or leveraged assets from one activity to expand into others.

It similarly stuck to a comfortable three stars (on a scale of five) for all characteristics that may determine whether an online company acts as a gatekeeper. It also did not say whether the integration of any of these activities (such as search engines, operating systems for smart devices, cloud services, payment services, and online advertising) within one company strengthened a large online platform’s role as a gatekeeper, aka, the ‘conglomerate effect’. After CEO Sundar Pichai’s deposition before the US House subcommittee on antitrust in July 2020, this lack of answers is rather telling.

It also entered an ambiguous ‘I don’t know’ to questions about whether large online platform companies need dedicated regulatory rules, and whether such rules would be effective in curbing gatekeeping (read: anti-competitive) practices of such platforms. It also didn’t say whether such rules should include obligations and whether a specific regulatory authority would be required to enforce them. While the company didn’t know if dedicated rules should enable regulatory intervention against large platforms with case-specific remedies, it said that a separate regulatory authority was not required to enforce such rules since the European Commission’s Directorate General for Competition could address them already.

Google’s recommendations

  • Don’t force services to prioritise speed of content removal: Google submitted that deciding whether content is illegal under local laws is challenging. At times, even when the facts are clear, it is not certain what conclusion the law itself would come to. Case in point: political speech could be used to “unlawfully harass a politician” but its removal could impinge on the speaker’s right to criticise their leaders. In such a case, Google wants to prioritise “careful review” of the rights of all involved over speed of content removal.
  • Introduce notice formalities: Since there is no central repository or way for Google to know who holds the trademark rights for all products listed on the internet, the company has submitted that it is important that trademark holders notify the company so that “meaningful review and action” can occur. Moreover, the company wants courts to determine whether trademarks have been infringed upon, as the nature of such infringements can be specific to jurisdictions, services, and goods.
  • Penalise bad faith content removal requests and complaints: Google pointed out that it often receives unreliable or unactionable user requests that are “a deliberate effort to use false claims to suppress speech or valid commercial activity”. People often submit overly-broad or unwarranted removal requests or content takedown requests without proper legal basis. Many also try to abuse the system and report copyright infringement to censor or hinder competitors.
  • Consistent set of rules for all market players so that illegal content doesn’t migrate to less regulated platforms. For instance, terrorist groups have been targeting smaller platforms. It acknowledged that not all services have the same level of resources but did not offer a solution for that.
  • Maintain safe harbour protections for platforms so that smaller online platforms can scale rapidly while ensuring user safety.
  • Maintain procedural safeguards around obtaining digital evidence: Google submitted its support for the European Commission’s proposal for an Electronic Evidence Regulation. It expressed concerns about proposals that seek to circumvent existing legal protections and require internet service providers to give user data to governments without prior oversight by an independent authority, due process, or proper safeguards, as that would shift the function of law enforcement from governments to private actors.
  • Don’t mandate proactive detection of illegal content, goods or services: Google called this a disproportionate move that could amount to a general monitoring obligation in practice.
  • Ensure that internet service providers don’t become mediators between the complainant and the content uploader, especially where counter-notices, that is, the process through which a user may dispute removal of their content, are concerned. This may require restrictions on who can submit a counter-notice, such as limiting it to the content uploader affected by a removal.
  • Apply country of origin principle within the EU to control the spread of illegal goods, services or content across multiple member countries. As per the principle, the law of the country where the content or service originates will be applicable.
  • Governments should focus resources on offline networks to target terrorist content and violent extremism that lead to indoctrination and recruitment. Google also recommended that governments use democratic processes to identify and proscribe designated terrorist organisations and individuals, and invest in programs that target social marginalisation.
  • Governments should keep an eye on the manufacturing of counterfeit products so that they don’t make their way online.
  • Regulation for harmful but legal content should focus on principles rather than specifics since what is appropriate on some sites may be inappropriate on others. Instead of dictating content policies, regulation should require services to formulate “appropriate guidelines, publish them, enforce them, and offer users an opportunity to appeal”.
  • Implement a more friction-based design to report illegal content. Google submitted that to report illegal content, reporters should fill out standardised notice forms that require more information and/or identification to prevent abuse. For violations of Community Guidelines, on the other hand, simple ‘click to flag’ buttons can be made available.
  • Requests for algorithmic transparency must come through consultation with companies and experts. Disclosure of raw code and data may reveal commercially sensitive information and compromise the privacy of users and the integrity of the platforms, Google submitted. It said that exposing the code, even to a very small group, amplifies security risks such as hacking and fraud. This suggestion assumes even more importance in the Indian context as NITI Aayog, in its proposal for an Indian AI Stack, has suggested placing AI algorithms in the public domain to address the problem of algorithmic bias. The draft e-commerce policy, too, has proposed that e-commerce companies disclose their algorithms to the government to mitigate bias.
  • Sanctions on platforms that don’t comply must not chill speech or restrict lawful content.

Google in numbers

  • According to Google’s transparency report on compliance with European privacy law, since the Court of Justice of the European Union ruled in May 2014 that individuals can ask search engines like Google to delist search results on the basis of a person’s name, the platform has received 965,129 requests to delist almost 3.8 million URLs. Of these URLs, Google has delisted 46.5% or 1.76 million URLs.
  • In compliance with Germany’s Network Enforcement Law, colloquially called NetzDG, the company has been publishing a biannual transparency report. Between January and June 2020, the company did not remove or block more than 76% of the content reported under NetzDG, as it concluded that the content did not violate Google’s Community Guidelines or the 21 criminal statutes referred to in NetzDG.
  • In 2019, Google shut down about 12,000 Google Ads accounts, with more than 10 million ads, for trying to advertise counterfeit goods. Over 99% of these accounts were detected using automated, machine learning-based tools. For violative ads not detected automatically, the company claims that it responds to “reliable” Google Ads counterfeit complaints within 24 hours.
  • Google has received requests to delist over 4.7 billion URLs for copyright infringement, as per its transparency report.
  • As per its annual Bad Ads report, it blocked and removed more than 2.7 billion bad ads in 2019 of which more than 35 million were phishing ads and 19 million were “trick-to-click” ads. It suspended nearly 1 million advertiser accounts for violating Google policies and terminated over 1.2 million publisher accounts and removed ads from over 21 million web pages that are a part of its publisher network.
  • In 2019, Google Play stopped over 790,000 policy-violating apps from entering the Play Store.
  • In 2019, Google Maps detected and removed more than 75 million policy-violating reviews and 4 million fake business profiles along with 580,000 reviews and 258,000 business profiles that were reported to Google Maps. It also disabled 475,000 abusive user accounts.
