
Challenges in Content Moderation: How Computer Vision Can Help

Reading Time: 4 minutes

TLDR: Content moderation is a hot topic right now, and it raises many challenges for corporations around the world. From employee safety to moderation on a global scale, computer vision may have the answer many businesses are looking for.

Challenges in Content Moderation: Is Computer Vision the Solution?

Content moderation is a hot topic, covered regularly in the news. Governments worldwide are asking (and often demanding) that private companies take affirmative action to prevent the circulation of offensive or harmful content. In recent years, with the issue of misinformation and the small matter of many politicians’ caustic use of social media, everyone from public policy experts to everyday opinionators (on every topic) has had something to say about the subject. What is not often addressed, however, is how challenging it is for these businesses to moderate content in a way that not only protects users but also avoids alienating them and hampering their own ability to do business.

This problem surfaces when moderators are tasked with reviewing visual content, whether image- or video-based. Much of it is merely offensive, doing little more than frustrating and angering the viewer. But some of it is so disturbing that viewing it can make people feel physically sick and cause deep, long-lasting emotional damage. The real challenge is therefore to find ways to manage this content without exposing humans to it: containing an ever-growing volume of content quickly and efficiently while also protecting moderators from its effects.

“You’re not doing enough”, “No, you’re doing too much” 

Whether we’re looking at social media websites or print-on-demand companies, private businesses are often pulled between two camps. On one side, users demand that online platforms take more responsibility for publishing and exposing users to hateful, offensive, or harmful content. On the other, a group of people claims that content moderation poses a risk to freedom of expression. There is also the argument that platforms often don’t act quickly enough: in recent years, misinformation and racially or sexually insensitive content has frequently remained online long enough to go viral, only to be removed by the hosting party too late.

But that’s only where the challenge begins. Taking a close look at the limitations of content moderation shows that there is much improvement needed in particular areas. 


1. Employee Safety 

The first topic that springs immediately to mind in relation to the challenges and risks involved in content moderation is the safety and wellbeing of employees. 

Content moderators spend hours viewing content for their employers, much of it disturbing. In recent years, both YouTube and Facebook have been sued by their content moderators who experienced PTSD symptoms and depression as a result of their work. As content moderation is a relatively new career with a notably high turnover rate, it could be years before we see the true impact this work has on the safety and wellbeing of people. 

Computer vision can play an incredibly important role in protecting content moderation employees by automatically blocking or removing distressing and emotionally disturbing content. This leaves humans to moderate only the videos and images at the fringes of acceptability, where it is genuinely unclear whether platform rules have been breached.
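To make the idea concrete, here is a minimal triage sketch in Python. The `classify` function and both thresholds are hypothetical placeholders rather than VISUA's actual API; the point is simply that high-confidence decisions are automated, so the most harmful content never reaches a human screen.

```python
# Illustrative sketch: triage content by model confidence so that only
# ambiguous items are ever shown to a human moderator.
# `classify` and the thresholds are hypothetical, not a real API.

def triage(image_bytes: bytes, classify,
           block_at: float = 0.95, clear_at: float = 0.05) -> str:
    """Route an image based on a model's estimated probability it is harmful.

    classify: any callable returning P(harmful) in [0, 1] for the image.
    """
    p_harmful = classify(image_bytes)
    if p_harmful >= block_at:
        return "auto_block"    # high confidence it is harmful: remove unseen
    if p_harmful <= clear_at:
        return "auto_approve"  # high confidence it is safe: publish
    return "human_review"      # ambiguous: queue for a moderator
```

In practice, the thresholds would be tuned per content category so that material a model is highly confident about never appears on a moderator's screen.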

2. Removal of Safe Content 

Mark Zuckerberg admitted in 2020 that 300,000 content moderation mistakes are made every day on Facebook. We’ve all seen someone on our feeds saying they had been in “Facebook Jail” after posting something innocuous. If moderators are faced with 3 million videos and images to moderate every day, it’s more surprising that this number isn’t higher.

The removal of safe content not only causes reputational damage to your company but also creates a frustrating experience for the user, reducing their likelihood of returning to your platform. Computer Vision takes the human error out of the mix, resulting in fewer mistakenly removed posts and reducing user friction. 


3. Missing Obscured Offensive Content

Similarly, humans are more likely to miss offensive content, particularly when it is obscured. For example, if words are jumbled or blurred, they may not be flagged. Visual-AI, however, can detect visual similarities, misspellings, and common obfuscation techniques, making it easier to block or flag such content.
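As a simplified illustration of one such technique, the Python sketch below normalizes common character-swap obfuscation (leetspeak, inserted punctuation) in text already extracted from an image, for example by OCR, and fuzzy-matches it against a blocklist. The character map, threshold, and blocklist terms are invented for illustration.

```python
# Illustrative sketch: catch simple text obfuscation by normalizing
# tokens and fuzzy-matching them against a blocklist.
import re
from difflib import SequenceMatcher

# Common leetspeak substitutions (illustrative, far from exhaustive).
LEET_MAP = str.maketrans("013457@$!", "oleastasi")

def normalize(token: str) -> str:
    """Lowercase, undo character swaps, and strip punctuation tricks."""
    token = token.lower().translate(LEET_MAP)
    return re.sub(r"[^a-z]", "", token)

def flag_text(text: str, blocklist: set[str], threshold: float = 0.85) -> set[str]:
    """Return the blocklist terms that any token in `text` resembles."""
    tokens = [normalize(t) for t in text.split()]
    return {
        term
        for term in blocklist
        for tok in tokens
        if tok and SequenceMatcher(None, tok, term).ratio() >= threshold
    }

# e.g. flag_text("h4te spe3ch", {"hate", "speech"}) -> {"hate", "speech"}
```

Real systems combine many more signals (visual similarity, context, layout), but even this toy version catches swaps that a tired human reviewer could easily miss.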

4. New Risks

New risks arise in the realm of content moderation all the time. More than a decade ago, for example, Pepe the Frog was a ubiquitous reaction meme used on every social platform; today it is recognized as a hate symbol with links to antisemitism and Nazism.

Computer vision models can be updated to recognize new hate symbols and similar imagery almost instantly, while it could take significantly longer to train human moderation teams on these new themes and tropes.

5. Volume

An obvious challenge for any content moderation team, particularly at bigger companies, is the sheer volume of content that needs to be reviewed. Billions of images and videos are posted online every day, and while not all of them need to be manually reviewed, an enormous number do.

High volumes of content increase the risk of error, whether that means erroneously labeling something as a breach of regulations or letting something disturbing slip through the net. Computer vision can handle massive volumes with greater accuracy, leaving your team to handle the more ambiguous content.

6. Global Content Moderation

Companies that operate on a global scale may find moderation particularly difficult as different countries and cultures have varying views on what is offensive and what isn’t. 

Without computer vision, this could mean maintaining multiple teams for different regions, or requiring one team to know the regulations of every country: time-consuming, confusing, and leaving plenty of room for error. Computer vision can instead be trained on the moderation parameters of each relevant region, culture, or age group, lowering the likelihood of mistakes. A rough sketch of what that might look like follows.
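The Python sketch below shows one way region-aware moderation might be configured: a single detection pipeline, with per-region thresholds deciding what counts as a violation. The regions, categories, and threshold values are entirely invented for illustration.

```python
# Illustrative sketch: one shared detection pipeline, with per-region
# thresholds deciding what counts as a violation. All values invented.

# A threshold above 1.0 means the category is never blocked in that
# region, since model confidences fall in [0, 1].
REGION_POLICIES = {
    "default":  {"hate_symbol": 0.90, "nudity": 0.90, "alcohol": 1.01},
    "region_a": {"hate_symbol": 0.80, "nudity": 0.90, "alcohol": 0.95},
    "region_b": {"hate_symbol": 0.90, "nudity": 0.70, "alcohol": 1.01},
}

def violations(detections: dict[str, float], region: str) -> list[str]:
    """detections maps category -> model confidence, e.g. {"nudity": 0.82}."""
    policy = REGION_POLICIES.get(region, REGION_POLICIES["default"])
    return [
        category
        for category, confidence in detections.items()
        if confidence >= policy.get(category, 1.01)
    ]

# e.g. violations({"alcohol": 0.97}, "region_a") -> ["alcohol"]
#      violations({"alcohol": 0.97}, "region_b") -> []
```

The detections are computed once and only the policy lookup differs per region, which is what makes this approach cheaper and more consistent than parallel human teams.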

Want To Talk? 

Computer vision is an essential asset for any company that hosts user-generated content. Protecting users, protecting employees, and preserving the company’s integrity are all high on the list of priorities for these companies, and computer vision is an excellent way of ensuring all three.

VISUA is ready to talk to anyone who is looking for new ways to improve their content moderation operation. Fill in the form below to get started and run a live test!

Book A Demo

