Eleven prototype apps focused on creating a safer online experience for women were unveiled at a series of policy design workshops organised by the World Wide Web Foundation. The apps, which address online gender-based violence, were created in workshops of 20-25 participants, including 2-3 representatives from each tech company drawn from both product and policy teams, the non-profit organisation said in a report released on June 28.
The prototype apps, built on fictional platforms, were based on a set of personas that “aim to reflect the experiences of highly visible women online from around the world, whilst recognising that no set of personas can fully capture the complexities of those experiences, specific identities or contexts,” the report added.
According to the Web Foundation, the policy designs of the prototypes revolved around two major themes:
- Curation: Focused on giving women more control and choice over what they see online, when they see it, and how they see it.
- Reporting: Focused on improving the processes through which women report abuse.
Here’s a list of all 11 prototype apps as described by the Web Foundation:
Calm The Crowd
Type of abuse: Calm The Crowd has been built primarily for users who have been the targets of an online mob.
How it is addressed: When the prototype detects a spike in abuse, it nudges users to check their granular control settings, which let them control who can see, share, comment on, or reply to their posts. It also lets users create their own keyword filters for replies and comments that they don’t want to view.
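The user-defined keyword filter described above could be sketched roughly as follows; the function names and the simple case-insensitive substring match are illustrative assumptions, not details from the report:

```python
def build_filter(blocked_keywords):
    """Return a predicate that hides comments containing any blocked keyword.

    Hypothetical sketch: a real platform would normalise text and match word
    boundaries; a plain case-insensitive substring check keeps the idea visible.
    """
    lowered = [k.lower() for k in blocked_keywords]

    def is_hidden(comment: str) -> bool:
        text = comment.lower()
        return any(k in text for k in lowered)

    return is_hidden


# The user supplies their own list of words they don't want to see.
hide = build_filter(["insult", "slur"])
visible = [c for c in ["nice post!", "what an insult"] if not hide(c)]
```

A production filter would likely also cover misspellings and leetspeak variants, which a bare substring check misses.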
Viral Mode
Type of abuse: The prototype is designed for users who are receiving hurtful and abusive messages as a result of their posts going viral.
How it is addressed: Intended primarily for video sharing sites, the prototype allows users to tap into Viral Mode whenever their posts are getting too much attention from other accounts. This feature equips the user with options to turn off comments and downloads. It also includes a ‘cooling-off period’ toggle button.
Com Mod
Type of abuse: The prototype is built for users who experience a broad range of hurtful and abusive messages in their comments but are unable to report them all on their own.
How it is addressed: Using Com Mod, a person can delegate the responsibility of reviewing and flagging abusive posts to trusted contacts or communities. It also offers more granular user control settings that can be customised for each post or for a specific amount of time, so that the user facing abuse is not overwhelmed by constantly blocking and muting accounts.
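Com Mod’s delegation idea can be sketched as a small permission check: only the account owner or a trusted contact may flag content on the owner’s behalf. The class and method names here are assumptions for illustration, not the prototype’s actual API:

```python
from dataclasses import dataclass, field


@dataclass
class ModerationQueue:
    """Hypothetical sketch of delegated moderation: the owner names trusted
    contacts, and only they may flag comments on the owner's behalf."""
    owner: str
    delegates: set = field(default_factory=set)
    flagged: list = field(default_factory=list)

    def add_delegate(self, username: str) -> None:
        self.delegates.add(username)

    def flag(self, reviewer: str, comment: str) -> bool:
        # Only the owner or a trusted delegate can flag content.
        if reviewer != self.owner and reviewer not in self.delegates:
            return False
        self.flagged.append(comment)
        return True


queue = ModerationQueue(owner="amina")
queue.add_delegate("trusted_friend")
ok = queue.flag("trusted_friend", "abusive comment")
denied = queue.flag("stranger", "spam")
```

The point of the design is that review work moves to delegates while the targeted user never has to read the flagged content themselves.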
Image Shield
Type of abuse: Image Shield caters to users who fear being identified in videos or images that were posted or shared by other accounts without their knowledge.
How it is addressed: When Image Shield recognises a user in a video or image posted by an external account, it notifies the user and gives them three options: to review the content, ask a friend to review it, or dismiss the notification. They can also collect and archive any flagged content with a date stamp, platform, name, and flag filter.
Reporting 2.0
Type of abuse: The prototype app is designed for users who feel exhausted by reporting online abuse or find that the options they’re given when making a report don’t usually reflect their experience.
How it is addressed: In Reporting 2.0, when the user hovers over a particular category of abuse, such as hate speech, a short explanation of that category and the community guidelines pertaining to it pop up on the screen. This enables the user to report the content according to the company’s guidelines. Users can also file a report in the original language of the abuse.
Report Hub
Type of abuse: The app is designed for users who are unfamiliar with how to flag content properly or who never hear back from the platform once they have reported the content.
How it is addressed: Through Report Hub, the user can track the status of all their reports on a dashboard or a timeline with key milestones such as ‘report made’, ‘report under review’, ‘review complete’, and ‘decision appealed’.
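The milestone timeline could be modelled as a simple ordered state machine; the transition rules below are assumptions for illustration, while the milestone names come from the description above:

```python
# Each report moves through the listed milestones in order, and every
# transition is recorded so the dashboard can show a full timeline.
MILESTONES = ["report made", "report under review",
              "review complete", "decision appealed"]


class Report:
    def __init__(self, report_id: str):
        self.report_id = report_id
        self.history = [MILESTONES[0]]  # every report starts as 'report made'

    @property
    def status(self) -> str:
        return self.history[-1]

    def advance(self) -> str:
        """Move to the next milestone, if any, and record it on the timeline."""
        idx = MILESTONES.index(self.status)
        if idx < len(MILESTONES) - 1:
            self.history.append(MILESTONES[idx + 1])
        return self.status


r = Report("r-101")
r.advance()  # report goes under review
r.advance()  # review complete
```

Keeping the full history, rather than only the current status, is what lets the dashboard render a timeline instead of a single label.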
Type of abuse: The prototype helps those users who feel that their reports of abusive content must be accompanied by local cultural and political contexts.
How it is addressed: This reporting dashboard provides specific prompts for users based on the category of abuse so that they can provide the context and information needed for the platform to respond more effectively. Users also have the option to specify if the report is being submitted in the same language as the abusive post. Additionally, a toggle button gives users control over whether they want to see the contents of the flagged post or not.
One Click
Type of abuse: One Click is for users who can anticipate being targeted by a social media pile-on and would like to get ahead of it.
How it is addressed: The prototype lets users set a time-limited safety mode that can be easily toggled with one click. Safety mode features include disabling comments or activating a ‘delay period’ for comments, activating a profanity or keyword filter, flagging keywords, and disabling tags.
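The time-limited toggle at the heart of One Click might look like the sketch below; the class name and the expiry logic are assumptions, while the idea of a single activation that lapses on its own comes from the description:

```python
import time


class SafetyMode:
    """Hypothetical sketch of a one-click, time-limited safety mode: one call
    switches on a bundle of protections that expire automatically."""

    def __init__(self):
        self.enabled_until = 0.0

    def activate(self, duration_seconds: float, now: float = None) -> None:
        # 'now' is injectable for testing; defaults to the wall clock.
        now = time.time() if now is None else now
        self.enabled_until = now + duration_seconds

    def is_active(self, now: float = None) -> bool:
        now = time.time() if now is None else now
        return now < self.enabled_until


mode = SafetyMode()
mode.activate(3600, now=0.0)        # one click: protections on for an hour
active_now = mode.is_active(now=100.0)
expired = mode.is_active(now=4000.0)
```

Automatic expiry matters here: the user does not have to remember to turn comment delays and keyword filters back off after the pile-on passes.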
GateWay
Type of abuse: GateWay addresses online abuse in the form of defamatory or gender- and identity-based attacks. It caters to users who struggle to balance their safety with their commitment to the causes that they are passionate about.
How it is addressed: Through GateWay, users who are being attacked can send alerts to platforms. Users who are frequently targeted can also apply for protected status. The prototype also facilitates easy access to trusted and verified Civil Society Organisations to seek support in handling online abuse.
iMatter
Type of abuse: This prototype is meant for users who receive large volumes of hateful comments but don’t know how to respond to them.
How it is addressed: iMatter does things differently by hosting a chat interface with chatbots that support users through the reporting process. The chatbots also offer users community support and check-ins with a psychologist. After the abuse is reported, iMatter follows up to ask how the user is doing.
Type of abuse: The app is specifically designed for users who feel at risk because of online abuse focused on their personal characteristics and perceived lack of competence.
How it is addressed: Users can carry out a risk assessment of the threats they are facing by answering a few short, multiple-choice pop-up questions. After the assessment is complete, the results indicate the user’s level of risk at a given time, based on indicators set by the user.
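One plausible way to turn multiple-choice answers into a risk level is a weighted score with thresholds. The questions, weights, and cut-offs below are invented for illustration; the report does not specify the prototype’s scoring model:

```python
# Each multiple-choice answer carries a score; the total maps to a level.
ANSWER_SCORES = {
    "never": 0,
    "sometimes": 1,
    "often": 2,
}


def risk_level(answers: dict) -> str:
    """Sum the scores of the user's answers and bucket them into a level."""
    total = sum(ANSWER_SCORES[a] for a in answers.values())
    if total >= 4:
        return "high"
    if total >= 2:
        return "medium"
    return "low"


level = risk_level({
    "receive threats": "often",
    "targeted by pile-ons": "sometimes",
    "doxxing attempts": "sometimes",
})
```

Because the report says the indicators are set by the user, a real version would let users choose which questions count and how heavily, rather than hard-coding them as here.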