The Real Challenge in Content Moderation

Reading Time: 6 minutes

TLDR: The real challenge in content moderation is not how time consuming it is, or even how often modern technology miscategorizes content, but how it impacts the humans who do the work. Content moderation is essential for protecting users, but it can cause a great deal of suffering for the people carrying it out.

At first glance, being employed to moderate content might seem like the ideal job. After all, to many people, spending the day looking through Facebook posts hardly sounds like work.

However, the truth is somewhat different. This article looks at why visual content moderation is a major challenge, one that comes at a great cost and has left many moderators suffering from mental health difficulties, including PTSD.

Why is content moderation essential?

Some argue that content moderation in any form goes against the fundamentals of free speech. But this is a very black-and-white argument in a very gray area.

Put simply, content moderation is essential to keep platforms free from offensive, threatening, violent, illegal, and pornographic posts. Illegal content, including child pornography and videos depicting gratuitous violence, is being uploaded daily, and this is an obvious area where content moderation is essential.

At the other end of the scale, massive amounts of visual content are illegal for copyright reasons. With the sheer volume of content being uploaded daily, moderation is critical. Leaving this as a “free-for-all” would result in social media feeds full of horrific content and personal bullying, and an internet awash with videos and images that infringe copyright.


Content moderation challenges

The biggest content moderation challenge facing platforms can be summarized in one word – Scale. Social media and content platforms like YouTube have to police millions of posts per day.

Every minute of every day, across thousands of platforms, millions of images and videos are uploaded. The vast majority of these are the entirely innocent sort of content that billions of us scroll through daily in our social media feeds. But a small percentage are not so innocent. And it is worth reiterating that even a small percentage of billions of daily uploads equates to a substantial number.

To get a better idea of the size of the problem that platforms face, consider this: according to Business Insider, 350 million images are uploaded to Facebook every day. It is easy to assume that content moderation is handled entirely by complex algorithms, and using AI is certainly an essential step; moderating 350 million images per day is simply unfeasible for humans alone.

However, this is still very much a process that depends on a human to make the final judgment, and arguably this is where the true cost of content moderation is to be found. To be clear, content moderation is not a case of sifting through millions of puppy images looking for copyright violations. For those employed as content moderators, the truth is far more sinister.

Content Moderation Dashboard

Moderation Case Example – Facebook

Looking at how a platform like Facebook tackles the moderation problem gives a great insight into the problems that all “mega-platforms” face when attempting to keep content within their guidelines. According to a report by NYU Stern, Facebook’s human moderators review over 3 million images, videos, and posts daily. To achieve this, Facebook employs over 15,000 moderators, and a little arithmetic shows that each moderator examines around 200 posts in an 8-hour shift, or a post roughly every 2.4 minutes. This might not sound too bad, but remember that many of these posts include videos that easily run longer than that.

This means moderators are often left with only a few seconds to make snap judgements on the remaining posts. Working under such pressure, mistakes are frequently made. In a recent white paper, Facebook CEO Mark Zuckerberg admitted that moderators make the wrong decision about 10% of the time. That works out to roughly 300,000 posts being incorrectly moderated every day.
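
As a quick back-of-the-envelope check of those figures, the short sketch below reproduces the arithmetic; the post volume, headcount, shift length, and error rate are the ones quoted above, and everything else is plain division:

```python
# Back-of-the-envelope check of the moderation workload figures quoted above.
posts_reviewed_per_day = 3_000_000   # NYU Stern report: posts reviewed by humans daily
human_moderators = 15_000            # moderators employed, largely via third-party vendors
shift_minutes = 8 * 60               # one 8-hour shift
error_rate = 0.10                    # ~10% wrong decisions, per Facebook's white paper

posts_per_moderator = posts_reviewed_per_day / human_moderators   # 200 posts per shift
minutes_per_post = shift_minutes / posts_per_moderator            # 2.4 minutes per post
wrong_decisions_per_day = posts_reviewed_per_day * error_rate     # 300,000 per day

print(f"{posts_per_moderator:.0f} posts per moderator per shift")
print(f"{minutes_per_post:.1f} minutes per post")
print(f"{wrong_decisions_per_day:,.0f} incorrect decisions per day")
```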

What constitutes unsuitable content?

One of the problem areas is judging what can be interpreted as unsuitable content. This isn’t as simple as it sounds. There are obvious cases, such as child pornography and terrorist videos or images, but not everything is so clear cut. Part of the problem is geography. Facebook outsources much of its moderation to third-party vendors who, in turn, employ low-cost labor, mostly from countries like the Philippines and India.

This means that the split-second decisions these moderators have to make are based on criteria that are often ambiguous, and are made by people from different cultures applying different standards as to what counts as allowable content.

The human cost of moderation

For the moderators, working under such pressure for basic wages has left many suffering from mental health problems and burnout. That would be bad enough on its own, but there are also growing reports of moderators suffering from PTSD-like symptoms.

This is better understood when the type of content that these moderators are faced with daily is examined. According to the NYU Stern report, for the first quarter of 2020, the statistics for the numbers and types of posts removed are as follows:

Type of Content Removed                 Number (in millions)
Adult nudity and sexual content         39.5
Content with graphic violence           25.5
Terrorist and organized hate            11.0
Hate speech                             9.6
Drugs and firearms                      9.3
Child sexual exploitation and nudity    8.6
Harassment and bullying                 2.3
Self-injury and suicide                 1.7

These are staggering figures. In a single quarter, moderators were subjected to 107.5 million posts that the overwhelming majority of us would find absolutely horrific.
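
For reference, the 107.5 million figure is simply the sum of the categories in the table above; the short sketch below also derives a rough per-moderator exposure, assuming the roughly 15,000 moderators cited earlier share the load evenly:

```python
# Q1 2020 removals by category, in millions of posts (NYU Stern report figures).
removed_millions = {
    "Adult nudity and sexual content": 39.5,
    "Content with graphic violence": 25.5,
    "Terrorist and organized hate": 11.0,
    "Hate speech": 9.6,
    "Drugs and firearms": 9.3,
    "Child sexual exploitation and nudity": 8.6,
    "Harassment and bullying": 2.3,
    "Self-injury and suicide": 1.7,
}
total_millions = sum(removed_millions.values())
print(f"Total removed in the quarter: {total_millions:.1f} million posts")  # 107.5

# Rough per-moderator exposure, assuming ~15,000 moderators share the load evenly.
per_moderator = total_millions * 1_000_000 / 15_000
print(f"Roughly {per_moderator:,.0f} removed posts per moderator per quarter")
```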

The mental health problems facing moderators aren’t just a few isolated cases, either. The scale of the problem is highlighted by a recent class action brought against Facebook, in which a judge in California approved an $85 million settlement for a class of more than 10,000 moderators who claimed Facebook had failed to protect them against psychological injuries resulting from their exposure to such gruesome content.

The plaintiffs argued that Facebook provided inadequate training for moderators and failed to put sufficient safeguards in place to limit exposure to such graphic content.

The deal includes a $52 million fund for ongoing mental health treatment for members of the class. While Facebook did not admit any wrongdoing as part of the settlement, it did agree to provide safer working environments, including enhanced review tools and a greater emphasis on coaching sessions with mental health counselors.

But this isn’t a perfect solution. At the end of the day, exposure to such graphic content on any scale has to have adverse effects on the people involved.

Human content moderation

Content Moderation – What next?

The key takeaway from this is that the human cost of content moderation is already too high, and with each passing year seeing dramatic increases in the amount of content requiring moderation, the situation cannot continue.

The use of Artificial Intelligence is the obvious route, but this isn’t as simple as it first sounds, particularly when it comes to moderating visual content. Moderating text is relatively easy for AI: words form regular patterns that computers can parse, and the main remaining difficulty is context. Videos and images are far more difficult. Consider someone posting a video of their two dogs “pretend fighting” in a field; computer vision can readily identify it as such. But a video of organized, illegal dogfighting could look incredibly similar.

But the technology is slowly getting to grips with the problem. As this detailed Visual Content Moderation article explains, although it isn’t a flawless system, computer vision is rising to the challenge of moderating visual content.

Lessons learned from the vast number of human moderators have been incorporated into the AI algorithms, and with many moderators losing their jobs during the recent pandemic, AI has been filling the void with increasing success.

Computer vision has come a long way in a few short years and is now dramatically increasing the number of infringing posts being taken down and the speed at which they are removed from platforms. Although at least for the foreseeable future, there will still be some level of human moderation, the use of computer vision can greatly reduce the number of harrowing and horrific posts moderators are exposed to.
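
To make that hybrid approach concrete, here is a minimal illustrative sketch, not VISUA’s actual API; the model call, thresholds, and names are hypothetical. The idea is that a computer vision model scores each upload, clear-cut cases are handled automatically, and only genuinely ambiguous content is routed to a human reviewer, shrinking the volume of disturbing material any one person has to see.

```python
# Minimal sketch of a hybrid AI + human moderation pipeline.
# The model call, thresholds, and names below are hypothetical placeholders,
# not a real API; production systems tune thresholds per content category.
from dataclasses import dataclass

@dataclass
class ModerationResult:
    decision: str       # "remove", "approve", or "human_review"
    confidence: float   # model's estimated probability of a policy violation

def classify_violation_probability(image_bytes: bytes) -> float:
    """Placeholder for a computer vision model that returns the probability
    that an image violates the platform's guidelines."""
    raise NotImplementedError("plug a real vision model in here")

def moderate(image_bytes: bytes,
             remove_threshold: float = 0.95,
             approve_threshold: float = 0.05) -> ModerationResult:
    p = classify_violation_probability(image_bytes)
    if p >= remove_threshold:
        # High-confidence violation: remove automatically, no human exposure.
        return ModerationResult("remove", p)
    if p <= approve_threshold:
        # High-confidence benign content: approve automatically.
        return ModerationResult("approve", p)
    # Ambiguous middle band: only these items reach a human moderator.
    return ModerationResult("human_review", p)
```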

Want to talk?

VISUA operates at the very forefront of the AI revolution that is transforming the way visual content is moderated. Our platform deals directly with many of the issues described in this article and is helping to negate the need for human moderators to be confronted with horrific images and videos.

To learn more about the technology that is driving this revolution, check out our video about the subject here, or contact us directly by filling out the form below.

