
The EU’s 44 Commitments On Tackling Online Disinformation Sharpen Self-Regulation Approach

The EU’s 2022 Code of Practice on Disinformation is aimed at limiting and monitoring disinformation and opaque political advertising in the bloc

‘Russia’s information, or rather disinformation war, clearly accompanies its military offensive in Ukraine. It’s just the latest reminder how dangerous for democracies disinformation and information manipulation can be,’ read the European Commission’s militaristic press release accompanying the strengthened 2022 Code of Practice on Disinformation.

Sharpening the focus of its previous iteration from 2018, the 2022 Code seeks to deploy a collaborative, democratic, and ‘European’ approach to combating online disinformation creeping across the member states of the European Union (EU). It lays out a voluntary, self-regulatory mechanism through which entities can fight online disinformation. Of late, the EU has been concerned both by COVID-19-related misinformation and by Russian ‘propaganda’ circulating online since the onset of the Russia-Ukraine war earlier this year.

Disinformation is defined in the Code to include ‘misinformation, disinformation, information influence operations and foreign interference in the information space [as defined in the European Democracy Action Plan].’

Applicable to Big Tech platforms, social media companies, digital advertisers, fact-checkers, and civil society, the Code recognises the multifaceted nature of disinformation online. Reinforcing that ‘all stakeholders in the [disinformation] ecosystem have roles to play in countering its spread,’ the Code lays out its ambit in granular detail across nine chapters and 40 pages, listing 44 detailed and diverse commitments to combating disinformation. The Code of Practice aims to eventually become a Code of Conduct under the EU’s Digital Services Act.

Each commitment is accompanied by a range of procedures to implement it. These include metrics for signatories on how to record, assess, and report their individual impact on combating disinformation. While deepening monitoring mechanisms and collaborative research, the Code also states that combating disinformation must be balanced against the Fundamental Rights of EU citizens.


Why it matters: Europe’s hits and misses with the 2022 Code, and with other burgeoning technology legislation, may offer critical insight for policymakers around the world, including in India, who are tackling a tsunami of spurious information online.




What Does the Code Mean by ‘Disinformation’?

The terms included in the scope of disinformation are defined under the European Democracy Action Plan as follows: 

  • Misinformation: ‘false or misleading content shared without harmful intent though the effects can still be harmful.’
  • Disinformation: ‘false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm.’
  • Information influence operations: ‘coordinated efforts by either domestic or foreign actors to influence a target audience using a range of deceptive means, including suppressing independent information sources in combination with disinformation.’
  • Foreign interference in the information space: ‘carried out as part of a broader hybrid operation, [it] can be understood as coercive and deceptive efforts to disrupt the free formation and expression of individuals’ political will by a foreign state actor or its agents.’

Who is Party to the Code?

The Code is voluntary—which means a relevant stakeholder can sign up for the specific commitments to combat disinformation that are relevant to the services they provide. As of June 23rd, there were 33 signatories to the Code, ranging from large tech companies to smaller advertising entities and fact-checking bodies. These include:

  • Big Tech platforms: Meta, Google, Microsoft
  • Social media platforms: Twitter, Clubhouse, Vimeo, TikTok
  • Software companies: Adobe
  • Fact-checkers: Faktograf (from Croatia), Demagog Association (from Poland), Pagella Politica (from Italy)
  • Disinformation and/or media monitors: Newsback, Newsguard, Global Alliance for Responsible Media (GARM)
  • Advertising Stakeholders: European Association of Communication Agencies, Interactive Advertising Bureau, MediaMath
  • Civil Society: Reporters Without Borders, Globsec, VOST Europe, DOT Europe 

Notable absences from this list, as of now, are Apple and Amazon.

Once they have signed, signatories have six months to implement the procedures of the commitments they’ve signed up for. After that, they have to submit a report on their baseline findings to the Commission—which is likely to receive the first of these reports by January 2023.


What Are Signatories to the Code Committing To?

Broadly speaking, the Code’s commitments include:

  • Deeper scrutiny of ad placements: this is pertinent to those placing and hosting ads in Europe. Commitments include demonetising the dissemination of disinformation through ads, ensuring that advertising systems are not used to disseminate disinformation, and cooperating with other players in the advertising ecosystem to strengthen best practices on the scrutiny of ads.
  • Transparent political advertising: this is relevant not only to those placing or hosting ads but to citizens and democratic institutions too. Commitments include working towards a common definition of ‘political and issue advertising’. Such ads should be clearly labelled in a simple way so that users can distinguish them easily—users should also be intimated as to why they are seeing the ad. Robust identification systems should be in place for third-party advertisers placing such ads. Signatories should also maintain an up-to-date repository of all such ads hosted for up to five years—and should develop application programming interfaces (APIs) for these repositories that users and researchers can search within, with a specified minimum benchmark of search criteria. They should also commit to engaging with ongoing research on disinformation.
  • Strengthened ‘integrity of services’: there is currently little clarity on the benchmarks that distinguish disinformation from other types of information. Commitments include working towards a set of ‘impermissible practices’ covering manipulative behaviour, and subsequent policies to combat them. Signatories should publicly list the practices that are impermissible on their platforms. Signatories developing or operating Artificial Intelligence systems commit to transparently publishing their policies to counter prohibited ‘manipulative practices for AI’—as defined under the proposed Artificial Intelligence Act. Signatories can also commit to sharing incidents that took place on their platforms with other stakeholders to prevent their resurgence.
  • Empowering Users with Reliable Information: this addresses the (dis)information asymmetry users of the Internet are usually at the receiving end of. Commitments include improving media literacy and critical thinking, in line with the Commission’s Digital Education Action Plan. ‘Safe design practices’ can be deployed to reduce the viral transmission of disinformation online. Signatories can commit to providing users with tools to assess the accuracy, trustworthiness, authenticity, and edit history of information—as well as providing access to fact-checking tools. Users should be able to flag content violating a signatory’s terms of service, and appeal the takedown of their content through a process that is timely, transparent, and objective. Respecting encryption and privacy, instant messaging applications should empower users to assess the authenticity of the information in messages they receive through specific features like labels. 
  • Empowering Researchers: this is relevant to the wide range of independent stakeholders looking to study disinformation. Commitments include providing researchers with up-to-date non-personal data, or anonymised and manifestly-made-public data, through APIs. Relevant signatories can commit to supporting good-faith research on disinformation involving their services, while those conducting research should do so transparently, ethically, and collaboratively.
  • Empowering Fact-Checkers: these commitments aim to break down the silos that platforms and fact-checkers currently find themselves operating in. Commitments include establishing transparent, sustainable relationships with fact-checkers to integrate their work into a platform’s services. Signatories can also commit to providing fact-checkers with up-to-date and possibly automated information to improve the quality and scope of their fact-checking.
  • Transparency Centre: this is relevant to institutions and members of the public assessing the signatory’s success at implementing the Code. Signatories can commit to developing a public ‘Transparency Centre’ website detailing their efforts to implement the Code in simple language. This website should be regularly updated.
  • Permanent Task Force: a Task Force can improve ‘the effectiveness of the Code and its monitoring [across stakeholders].’ To achieve this, signatories promise to participate in the Code’s permanent Task Force—which includes signatories as well as members of the European Digital Media Observatory and the European Regulators Group for Audiovisual Media Services.
  • Monitoring Implementation: signatories commit to ensuring that adequate financial resources and personnel are set aside to implement the Code. Within one month of the end of the six-month implementation period, signatories commit to submitting a baseline report on their performance. They commit to working with the Task Force to develop structural indicators to assess implementation—and to submit a report on them nine months from the signing of the Code. In special situations, they may share appropriate data with the Commission through the rapid response system of the Task Force. Very Large Online Platforms (those reaching 45 million users) commit to being independently audited by a trustworthy entity on their compliance—in order to meet the requirements of the Digital Services Act.

How Will the Code Be Enforced?

Under the Code, ‘Very Large Online Platforms’ that reach 45 million users will be ‘underpinned’ by the Digital Services Act (DSA). The Act obliges them to prevent the systemic risks inherent to their operations—which, in this case, include the spread of disinformation. Companies face penalties of up to 6% of their global turnover for infringing the DSA—which may well prove an effective incentive to comply with the Code in the future. ‘The DSA will make large platforms put our society’s well-being first and their business interests second,’ adds the press release on the 2022 Code.

According to a FAQ sheet released by the Commission, the 2022 Code aims to become a Code of Conduct under the co-regulatory approach adopted by the DSA. The Commission adds that the Code complements upcoming legislation on transparency in political advertising. This may strengthen the Code’s regulatory apparatus.

For smaller entities, whether linking the Code to the Act will ensure compliance remains to be seen—as they may not be held to the same standards as larger platforms. Commentators have suggested that this may result in smaller entities not doing enough to tackle disinformation.

What Does the Code Build On?


In 2018, the European Union released the first Code of Practice on Disinformation—a document that lies at the heart of its efforts to combat disinformation online. The Code was co-conceptualised by the EU and various stakeholders in the bloc’s IT sector.

It prescribed a self-regulatory mechanism that entities could voluntarily subscribe to in order to combat the spread of disinformation online. Some of the policy steps it took included improving transparency in political advertising and demonetising advertisers spreading disinformation.

The 16 signatories to the 2018 Code included Facebook, Twitter, Google, Mozilla, and TikTok. Multiple implementation reports were submitted by the signatories over the course of 2019 and 2020. 

Public critique at the time also described the Code as ‘too broad-brush’ to have a significant impact on combating disinformation. Simply put: it lacked the specificity needed to effectively combat disinformation, a gap entities often used to their advantage. This sentiment was echoed by the Commission in its September 2020 report assessing the implementation of the 2018 Code. While the Commission found that the Code provided a sound structural foundation for combating disinformation online, it also noted various infirmities raised by the signatories. A key issue raised: ‘it remains difficult to precisely assess the timeliness and impact of signatories’ actions, as the Commission and public authorities are still very much reliant on the willingness of platforms to share information and data.’

In May 2021, Thierry Breton, the EU’s Internal Market Commissioner, commented that no major tech company had ‘really fully respected the code.’ Reports suggested that Twitter had complied with its tenets much more than Google, Facebook, Microsoft, or TikTok.

How Did the 2020 European Commission Report Recommend Strengthening the Code?


Given that the Code has to be implemented across EU member states, the 2020 report suggested that it would benefit from providing clear shared definitions of critical terms. Clearer commitments to combating disinformation—as well as well-defined impact metrics—would aid and assess the execution of the Code by signatories. Widened access to data on disinformation would help independent bodies study the phenomenon better.

In May 2021, the Commission suggested further measures to strengthen the Code. These included wider participation from relevant stakeholders, stronger measures to demonetise disinformation, a more robust monitoring framework, and improved researcher access to data on disinformation.

Various signatories to the Code worked with the Commission to redraft it after this period. On June 16th, the 2022 iteration of the Code was released. ‘We now have very significant commitments to reduce the impact of disinformation online,’ said Věra Jourová, Vice-President for Values and Transparency at the European Commission, on the 2022 Code’s release. ‘[We have] much more robust tools to measure how these are implemented across the EU in all countries and in all its languages.’ 


This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
