
Summary: Tech companies adopt self-regulatory code to tackle harmful online content in New Zealand

Tech companies Meta, Google, TikTok, Twitch, and Twitter have adopted a self-regulatory code to tackle harmful online content in New Zealand

Tech companies Meta (Facebook and Instagram), Google (YouTube), TikTok, Amazon (Twitch), and Twitter on July 25 adopted a self-regulatory code of practice that obligates them to actively reduce harmful content on their digital platforms and services in New Zealand, internet-safety group Netsafe said in a press release. Netsafe developed the code in collaboration with the social media platforms and other industry and government stakeholders.

“We are constantly finding responsive ways to keep pace with the potential threats posed by technology and bridge regulatory gaps. Everyone deserves to be safe online and industry codes are one means to support that to happen. Ultimately addressing these important issues while protecting freedom of expression, will require a whole of society approach and we think this Code is a step in the right direction.” — Graeme Muller, CEO of NZTech, a New Zealand-based tech organisation

The code outlines the collective and voluntary commitments made by the platforms and requires each company to publish annual reports on its progress in adhering to the code. Platforms are also subject to sanctions for breaches of their commitments and must take part in a public complaints mechanism.

Why does this matter: By self-regulating, tech companies in New Zealand are dodging the alternative of government regulation of harmful online content, which would probably have been more onerous. Moreover, government regulation raises questions of censorship, as seen in India with the IT Rules 2021 and the recent amendments proposed to them, which self-regulation might be able to avoid. Separately, in India, Meta, Twitter, and Google are drawing up the structure for a self-regulatory body, the Economic Times reported on July 27. This comes after the Minister of State for IT, Rajeev Chandrasekhar, said that the government is open to the creation of a self-regulatory grievance redressal appellate body in place of the government’s proposed Grievance Appellate Committee (GAC), which platforms have strongly opposed. The code adopted in New Zealand could serve as a model for tech companies in India.




What are the commitments outlined in the code?

The Aotearoa New Zealand Code of Practice for Online Safety and Harms outlines a set of four commitments with corresponding outcomes and measures. Signatories are required to specify which commitments, outcomes, and measures are most relevant to them and, for each measure, provide either an initial assessment of the practices being undertaken or an explanation of why the measure is not being implemented.

1. Reduce the prevalence of harmful content online

Provide safeguards to reduce the risk of harm arising from online child sexual exploitation and abuse (CSEA)

  • Implement and enforce policies that seek to prevent known child sexual abuse material from being made available to users or accessible on their platforms and services.
  • Implement and enforce policies that seek to prevent search results from surfacing child sexual abuse material.
  • Implement and enforce policies that seek to adopt enhanced safety measures to protect children online from peers or adults seeking to engage in harmful sexual activity with children (e.g. online grooming and predatory behaviour).
  • Implement and enforce policies that seek to reduce new and ongoing opportunities for the sexual abuse or exploitation of children.
  • Work to collaborate across industry and with other relevant stakeholders to respond to evolving threats.

Provide safeguards to reduce the risk of harm arising from online bullying or harassment

  • Implement and enforce policies that seek to reduce the risk of individuals (both minors and adults) or groups being the target of online bullying or harassment.
  • Implement and maintain products and tools to mitigate the risk of individuals or groups being the target of online bullying or harassment.
  • Implement and raise awareness of policies and tools for users to report online bullying or harassment content.
  • Support programs, initiatives or features that seek to educate and raise awareness on how to reduce or stop online bullying or harassment.

Provide safeguards to reduce the risk of harm arising from online hate speech

  • Implement and enforce policies that seek to prohibit or reduce the prevalence of hate speech.
  • Implement and maintain products and tools that seek to prohibit or reduce the prevalence of hate speech.
  • Implement and raise awareness of policies and tools for users to report potential hate speech.
  • Support programs and initiatives that seek to encourage critical thinking and educate users on how to reduce or stop the spread of online hate speech.
  • Work to collaborate across industry and with other relevant stakeholders to support efforts to respond to evolving harms arising from online hate speech.

Provide safeguards to reduce the risk of harm arising from online incitement of violence

  • Implement and enforce policies that seek to prohibit or reduce the prevalence of content that potentially incites violence.
  • Implement and maintain products and tools that seek to prohibit or reduce the prevalence of content that potentially incites violence.
  • Implement and raise awareness of product or service-related policies and tools for users to report content that potentially incites violence.
  • Support programs and initiatives that seek to educate users on how to reduce or stop the spread of online content that incites violence.
  • Work to collaborate across industry and with other relevant stakeholders to support efforts to respond to evolving harms arising from online content that incites violence.

Provide safeguards to reduce the risk of harm arising from online violent or graphic content

  • Implement and enforce policies that seek to prohibit and reduce the spread of violent or graphic content online.
  • Implement and maintain products and tools that seek to prohibit and reduce the spread of violent or graphic content.
  • Implement and raise awareness of policies and tools for users to report potential violent or graphic content.

Provide safeguards to reduce the risk of harm arising from online misinformation

  • Implement and enforce policies that seek to reduce the spread of online misinformation.
  • Implement and enforce policies that seek to penalise users who repeatedly post or share misinformation that violates related policies.
  • Support media literacy programs and initiatives that seek to encourage critical thinking and educate users on how to reduce or stop the spread of misinformation.
  • Support programs that seek to facilitate civil society, fact-checking bodies and other relevant organisations working to combat misinformation.
  • Work to collaborate across industry and with other relevant stakeholders to support efforts to respond to evolving harms arising from misinformation.

Provide safeguards to reduce the risk of harm arising from online disinformation

  • Implement and enforce policies that seek to suspend, remove, disable, or penalise the use of fake accounts that are misleading, deceptive or may cause harm.
  • Implement and enforce policies that seek to remove accounts (including profiles, pages, handles, channels, etc.) that repeatedly spread disinformation.
  • Implement and enforce policies that seek to provide information on public accounts (including profiles, pages, handles, channels, etc.) that empowers users to make informed decisions (e.g. date a public profile was created, date of changes to primary account information, number of followers).
  • Implement and enforce policies that seek to provide transparency on paid political content (e.g. advertising or sponsored content) and give users more context and information (e.g. paid political or electoral ad labels or who paid for the ad).
  • Implement and enforce policies that seek to disrupt advertising or reduce economic incentives for users who profit from disinformation.
  • Work to collaborate across industry and with other relevant stakeholders to support efforts to respond to evolving harms arising from disinformation.

2. Empower users to have more control and make informed choices

Users are empowered to make informed decisions about the content they see on the platform

  • Implement and enforce policies that help users make more informed decisions on the content they see.
  • Implement and enforce policies that seek to promote accurate and credible information about highly significant issues of societal importance and of relevance to the digital platform’s user community (e.g. public health, climate change, elections).
  • Support programs that educate or raise awareness on disinformation, misinformation and other harms, such as via media/digital literacy campaigns.

Users are empowered with control over the content they see or their experiences and interactions online

  • Implement and enforce policies that seek to provide users with appropriate control over the content they see, the character of their feed or their community online.
  • Launch and maintain products that provide users with controls over the appropriateness of the ads they see.

3. Enhance transparency of policies, processes and systems

Transparency of policies, systems, processes and programs that aim to reduce the risk of online harms

  • Publish and make accessible to users Signatories’ safety and harm-related policies and terms of service.
  • Publish and make accessible information (such as via blog posts, press releases and/or media articles) on relevant policies, processes, and products that aim to reduce the spread and prevalence of harmful content online.

Publication of regular transparency reports on efforts to reduce the spread and prevalence of harmful content and related KPIs/metrics

  • Publish periodic transparency reports with key performance indicators (KPIs) showing actions taken based on policies, processes and products to reduce the spread or prevalence of harmful content (e.g. periodic transparency reports on global removal of policy-violating content).
  • Submit to the Administrator (defined below) an annual compliance report that sets out the measures in place and progress made in relation to Signatories’ commitments under the code.

4. Support independent research and evaluation

Independent research that helps build an understanding of the impact of safety interventions and harmful content on society or research on new technologies to enhance safety or reduce harmful content online

  • Support or participate in programs and initiatives undertaken by researchers, civil society and other relevant organisations (such as fact-checking bodies). This may include broader regional or global research initiatives undertaken by the Signatory which may also benefit Aotearoa New Zealand.
  • Support or convene at least one event per year to foster multi-stakeholder dialogue, particularly with the research community, regarding one of the key themes of online safety and harmful content.

Support independent evaluation of the systems, policies and processes that have been implemented in relation to the code

  • Commit to selecting an independent third-party organisation to review the annual compliance reports submitted by Signatories and evaluate the progress made against the Commitments, Outcomes, and Measures outlined above, as well as the commitments made by Signatories.

How will compliance be ensured?

The code provides a governance framework that allows the Administrator, government, civil society, other relevant stakeholders, and the public to hold the Signatories to their commitments. The governance framework, which will be drawn up within six months, will include provisions for:

  1. An Administrator: The Administrator will be an organisation agreed upon and appointed by the Signatories to oversee the day-to-day administration of the code. The Administrator will facilitate regular meetings, establish and facilitate the Complaints Mechanism (outlined below), publish an annual transparency report, collect and publish the reports filed by Signatories, engage and onboard new Signatories, make binding decisions on the termination of a Signatory, name individual Signatories for positive or negative progress, etc. The Administrator will be funded by the Signatories, who have agreed to contribute annually.
  2. An Oversight Committee: The Oversight Committee will comprise a range of stakeholders, including representatives from the Signatories, Māori cultural partners, civil society, and other relevant stakeholders such as the government and academics, who will meet annually to review how Signatories are meeting their commitments under the code. This includes assessing Signatories’ annual compliance reports, complaints submitted through the Complaints Mechanism, and the progress of the code.
  3. A Complaints Mechanism: The Administrator will work with Signatories to establish a complaints policy, violation definitions, and a mechanism for addressing non-compliance by Signatories. The Complaints Mechanism will allow people residing in Aotearoa New Zealand to submit complaints against Signatories that they believe are in breach of the code. The Complaints Mechanism is expected to deal only with breaches of the code and not with other content-related issues. The Administrator will produce and publish an annual transparency report on the complaints received and responded to.
    • Redress for non-compliance: The Administrator will work with Signatories to establish the criteria for determining non-compliance and the appropriate redress mechanism for Signatories to respond to complaints. Signatories will be given a reasonable opportunity to respond to complaints, but those who repeatedly fail to comply with their commitments under the code may have their participation in the code terminated.
  4. Annual compliance reporting: Signatories will each provide an annual report to the Administrator setting out the measures implemented and the progress they have made in relation to the expected outcomes, as outlined in the above section. Reports will follow the template provided in the code. The report will be published on a publicly accessible website maintained by the Administrator.
  5. Biennial review of the Code: The code will be reviewed by the Oversight Committee after it has been in operation for twelve months, and thereafter at two-yearly intervals. The reviews will be based on input from the Signatories and other relevant stakeholders. Any changes or amendments to the code resulting from a review will take effect only once agreed upon by the Administrator and all Signatories.

Not all are happy with self-regulation

Certain interest groups in New Zealand want more details on sanctions for companies that fail to comply and on the mechanism for public complaints, Reuters reported. Some are also unhappy that the code will be administered by an industry body rather than the government. “This is a weak attempt to preempt regulation – in New Zealand and overseas – by promoting an industry-led model,” Mandy Henk, chief executive of Tohatoha NZ, said in a statement to Reuters.


This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
