How to report fake or hate content on Facebook, YouTube and Instagram

Facebook

Facebook maintains a user reporting system alongside AI-based hate content detection, internal review, third-party fact checking and similar measures. Even as AI-based hate content detection grows, the user reporting system still plays a role in flagging content that is potentially hateful. In 2017[1], only 24% of the hate content Facebook acted on was detected proactively by its AI systems; the rest came from user reports. Today that AI-based automated detection figure has risen to 80.5 percent. User reporting nevertheless remains an important pillar of the process of eliminating hate content.
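
The two figures describe the proactive detection rate: the share of actioned hate content that automated systems flagged before any user reported it. A minimal sketch of that arithmetic in Python (the counts are made up purely for illustration):

```python
# Proactive detection rate: share of actioned hate content flagged by
# automated systems before any user report. Counts are illustrative only.

def proactive_rate(flagged_by_ai: int, flagged_by_users: int) -> float:
    total = flagged_by_ai + flagged_by_users
    return flagged_by_ai / total if total else 0.0

# 2017: roughly 24 of every 100 actioned items were found by AI first.
print(f"{proactive_rate(24, 76):.1%}")    # 24.0%
# The later figure cited above: about 80.5 of every 100.
print(f"{proactive_rate(805, 195):.1%}")  # 80.5%
```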

Facebook has detailed resources on reporting things[2], categorized into several types of issues:

  • Reporting abuse, which includes reporting on content.
  • Reporting technical issues
  • Reporting privacy violations
  • Reporting hacked and fake accounts

Reporting abuse is the category most directly linked to hate content or hate speech. A report can target a profile, a post (either on the poster’s wall or on your timeline), photos and videos, messages, Pages, groups, events, comments, advertisements on Facebook and so on. ‘The best way to report abusive content or spam on Facebook is by using the Report link near the content itself’[3]. Facebook also explains how to report content if the reporter is not on Facebook or cannot see the post that violates the standards[4]. The reporting process asks users to explain how the profile, comment, post, message, photo or video violates the Community Standards, and lets them provide screenshots, justifications and so on. This allows users to submit a report to Meta, and Facebook also uses this feedback to build its vocabulary for machine learning on hate content.

When you report a profile, the following options appear:

  • Pretending to be someone
  • Fake account
  • Fake name
  • Posting inappropriate things
  • Harassment or bullying
  • I want to help
  • Something else

Fake and hacked accounts are often associated with the creation, promotion and distribution of hate content. Facebook offers ways to report fake or hacked accounts. There is a dedicated URL from Facebook that users can use to secure their accounts by changing their password and reviewing recent login activity; it can also be used to help someone else secure an account they have lost access to because it was hacked. Impersonation is another issue: it happens when someone pretends to be you or someone you know, and such an account or profile can be reported by clicking on the cover photo and selecting ‘Report’. A fake profile is slightly different from a hacked profile or impersonation: ‘A fake profile is a profile where someone is pretending to be something or someone that doesn’t exist’[5]. To report a fake profile, one has to open the menu under the cover photo and select Find Support or Report Profile. Facebook also provides the option to report, and to fill out a form, if someone thinks a photo or video has violated their privacy.

Facebook claims to maintain the authenticity of individuals and, as per its Community Standards, encourages everyone to have only one personal profile. Anyone who wants to represent a business, organization, brand or product can do so by creating and managing a Page.

Even though this reporting remains the first line of defense against hate content and hate speech, Facebook reported[6] in 2019 that it was limiting the scope of user reporting with the intention of reducing the burden on its content moderators. It also planned to close reports in cases where few people had seen the post or the issue reported was not severe. Facebook also shared a data point: 75 percent of the time, reviewers found that the hate speech that was reported did not violate Facebook’s Community Standards. Facebook’s argument is that reviewers could instead be engaged in proactively looking for violations that are severe in nature.
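
A hypothetical sketch of the kind of triage described above, in which low-severity reports that few people have seen are closed automatically. The field names, severity labels and threshold are illustrative assumptions, not Facebook’s actual system:

```python
# Hypothetical report-triage rule: auto-close reports that are neither
# severe nor widely seen. Labels and cutoff are illustrative assumptions.

def triage_report(severity: str, unique_viewers: int) -> str:
    severe = severity in {"high", "critical"}
    widely_seen = unique_viewers >= 100  # arbitrary illustrative cutoff
    if severe or widely_seen:
        return "queue_for_human_review"
    return "auto_close"

print(triage_report("low", 12))        # auto_close
print(triage_report("critical", 12))   # queue_for_human_review
```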

As mentioned earlier, Facebook also has an ‘at risk’ country list with a different set of priorities, through which it tries to contain hate content where it takes its most severe forms. But even adding a country to that priority list typically takes Facebook a year, to first build classifiers and improve enforcement. For that reason, Facebook prioritizes building classifiers around ongoing violence rather than temporary outbreaks of violence.

YouTube

YouTube officially provides multiple ways to report content. A report can target a channel, a playlist, a thumbnail, a comment, a live chat message or even an advertisement.

To report[7] a video, channel, playlist or comment, a user can do so from a cell phone (Android or iOS) or from the website. YouTube reviews reported videos on an ongoing basis. For any reporting, you first have to sign in to YouTube services. Beside each video, at the bottom right corner, there is a three-dots (…) link that anyone can click to find the ‘Report’ option. Once it is clicked, a list of categories appears. The user has to decide which category the content falls into before the report is made:

  • Sexual content
  • Violent or repulsive content
  • Hateful or abusive content
  • Harassment or bullying
  • Harmful or dangerous acts
  • Misinformation
  • Child abuse
  • Promotes terrorism
  • Spam or misleading
  • Infringes my rights
  • Captions issue
  • None of these are my issue

The hate content category is clubbed with abusive content. It also has three sub-categories: promotes hatred or violence, abusing vulnerable individuals, and abusive title and description. Once the sub-category is selected, the next step is to provide additional details, ideally with a time-stamp of the moment in the video where it violates the Community Guidelines. Reporting a thumbnail is a bit tricky: you have to hover over the video, which pops it up with an option to report it, shown as reporting an image or the title. In the cell-phone YouTube apps, this reporting feature is usually at the top right corner, under the settings icon of that video. To report an advertisement, the user has to point to the ad first to find ‘Why this ad’ at the bottom left corner; there you can find the option to report the ad. There is a small form to fill out before you can submit the report for review.
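
As an illustration of the pieces of information such a report collects (category, sub-category, free-text details and an optional timestamp), here is a minimal sketch as a simple data structure. The field names are hypothetical; this is not YouTube’s actual reporting API:

```python
# Illustrative sketch of what a video report gathers. Field names are
# hypothetical and do not correspond to any real YouTube API.

from dataclasses import dataclass
from typing import Optional

HATE_SUBCATEGORIES = {
    "promotes hatred or violence",
    "abusing vulnerable individuals",
    "abusive title and description",
}

@dataclass
class VideoReport:
    video_id: str
    category: str                      # e.g. "Hateful or abusive content"
    subcategory: Optional[str] = None  # one of HATE_SUBCATEGORIES for hate reports
    details: str = ""                  # extra context from the reporter
    timestamp_seconds: Optional[int] = None  # where in the video the violation occurs

report = VideoReport(
    video_id="abc123",
    category="Hateful or abusive content",
    subcategory="promotes hatred or violence",
    details="Slurs directed at a protected group.",
    timestamp_seconds=95,
)
```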

In the report ‘Alternative and extremist content on YouTube’[8], the authors argued that YouTube’s architecture supports the growth of hate content on the platform. First, YouTube is a user-driven platform that allows anybody to upload or post anything, letting extremist and hate-filled content compete with news and information based on established facts. Second, its financial reward model, based on viewership and watch time, may encourage creators to make content more sensational and appealing. Third, YouTube’s algorithm recommends videos based on past behavior, which in turn can shape user behavior, because the top recommendation often plays automatically after the current video ends. Fourth, video is a time-consuming medium, and reviewers need more time to review the title, the transcription and the presentation. And fifth, with frequent viewing of certain types of videos, the user can feel or establish a connection with the people they frequently watch, and thereby potentially expose themselves to the effects of such videos.

Beyond these architectural shortcomings, one of the challenges is to make the reporting process seamless and free of complications. It requires more reviewers with local knowledge and an understanding of the local context. Back in 2017[9], YouTube hired thousands of internal reviewers to remove videos that violated its Community Guidelines and to support the machine learning processes that spot troublesome videos. But during the pandemic, in March 2020[10], YouTube sent many of its content moderators home and relied more on automated, AI-based filters. Interestingly, the number of videos removed doubled in the second quarter of 2020.

Instagram

Instagram allows the reporting of a profile or a piece of content that doesn’t follow its Community Guidelines. To do so, one needs an Instagram account and one of the following reasons to report:

  • Spam
  • Nudity or sexual activity
  • Hate speech or symbols
  • Violence or dangerous organizations
  • Bullying or harassment
  • Selling illegal or regulated goods
  • Intellectual property violations
  • Suicide or self-injury
  • Eating disorders
  • Scams or fraud
  • False information

Apart from these, one can report a profile for posting content it shouldn’t, for pretending to be someone else, or for belonging to a child under the age of 13. All reporting is kept anonymous, except when someone reports an intellectual property rights violation. The account being reported will not see who reported it.

Anyone can report a post or a profile using either a cell phone or a computer. To report a post, a user needs to find the post and tap the three dots (…) above it or at its top right corner. A ‘Report’ option then pops up, and on-screen instructions can be followed to report the content. To report a profile, you can tap the username from the Feed, from a story post or from your chat with them, or search for the username to go to the profile. The three dots (…) will appear at the top right or above the profile; tap ‘Report’ there and follow the on-screen instructions. If someone doesn’t have an Instagram account, they can still report abuse, spam or anything else that doesn’t follow the Community Guidelines using this form.

Apart from the reporting process, Instagram has rolled out a feature that prevents users from viewing abusive messages by filtering offensive content. It also makes it harder for people blocked by users to circumvent the block and contact them through new accounts. The filter, which can be activated in Instagram’s privacy settings, can be customized by users to include words, phrases and emojis that they wish to block or avoid receiving in their message requests. Filtered messages are sorted into a hidden requests folder, where users can report, delete or open them at their discretion. Users can also set their accounts to ‘private’ mode, meaning that only approved followers can see the content posted on that user’s page, which makes it harder to regulate the content posted on private accounts.
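
As an illustration of how such a customizable filter might behave, here is a minimal sketch in Python. The block list, function names and matching rule (simple case-insensitive matching against user-chosen terms) are assumptions for illustration, not Instagram’s actual implementation:

```python
# Hypothetical sketch of a "hidden words" style message-request filter.
# The block list and matching rule are illustrative assumptions only.

def is_offensive(message: str, blocklist: set) -> bool:
    # Case-insensitive check for any blocked word, phrase or emoji.
    text = message.lower()
    return any(term.lower() in text for term in blocklist)

def route_message_request(message: str, blocklist: set) -> str:
    # Matching requests land in a hidden folder instead of the main inbox;
    # the user can later open, delete or report them at their discretion.
    return "hidden_requests" if is_offensive(message, blocklist) else "inbox"

blocklist = {"some offensive phrase", "another blocked word", "🤬"}
print(route_message_request("hello there", blocklist))  # inbox
print(route_message_request("you 🤬", blocklist))        # hidden_requests
```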

  • How do I report a post or profile on Instagram?

https://help.instagram.com/192435014247952/

  • How to Report Things

https://help.instagram.com/2922067214679225/



[1] https://ai.facebook.com/blog/how-ai-is-getting-better-at-detecting-hate-speech/

[2] https://www.facebook.com/help/1380418588640631/?helpref=related_articles

[3] https://www.facebook.com/help/1380418588640631/?helpref=related_articles

[4] https://www.facebook.com/help/1723400564614772

[5] https://www.facebook.com/help/1216349518398524/?helpref=hc_fnav

[6] https://www.theverge.com/22743753/facebook-tier-list-countries-leaked-documents-content-moderation

[7] https://support.google.com/youtube/answer/2802027?hl=en&co=GENIE.Platform%3DDesktop&oco=0#zippy=%2Creport-a-video%2Creport-a-playlist%2Creport-a-thumbnail

[8] https://www.adl.org/resources/reports/exposure-to-alternative-extremist-content-on-youtube

[9] https://www.nytimes.com/2017/12/04/technology/youtube-children.html

[10] https://unesdoc.unesco.org/in/documentViewer.xhtml?v=2.1.196&id=p::usmarcdef_0000377720_eng&file=/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_442239bd-387e-47f8-aa1b-4a98fe8e2e5f%3F_%3D377720eng.pdf&locale=en&multi=true&ark=/ark:/48223/pf0000377720_eng/PDF/377720eng.pdf#Plataformas_Discurso_Odio_Ingles.indd%3A.11061%3A135
