Content moderation is at a crisis point. Human moderators are exhausted and traumatized, while automated systems often fail to stop offensive material from being published and, just as often, remove or ban content that is actually above board.
Because so much of the content we consume today is visual, many of the challenges in content moderation can be overcome with Visual-AI, and text detection in particular addresses some specific ones.
Traditional text detection means spotting banned terminology in typed-out posts on social media; in the context of computer vision, text detection allows the analysis of images and videos to detect and understand text embedded in the media.
Text detection is often referred to as Optical Character Recognition (OCR); however, text detection better describes how the technology works in the context of image and video analysis. OCR focuses on converting text in documents into an editable format, whereas text detection focuses on snippets of text embedded in images and videos and extends to understanding phrase structure and context.
It can be used individually or alongside other Visual-AI technologies in the processing of media in social media posts, ecommerce sites/marketplaces, electronic messages and so on. It plays an integral role in brand monitoring, counterfeit detection, phishing detection and, of course, content moderation.
Text detection for content moderation is applicable to a variety of industries, enabling moderation systems to see more than they traditionally have.
The most significant role text detection plays is ensuring that infringing, offensive, harmful and hateful content never makes it onto your platform, a consideration that is becoming increasingly important for platforms of all kinds.
Let’s take a look at where text detection may come into effect with various platforms.
1. Print-on-Demand
Teespring is one of the most renowned and widely used print-on-demand websites worldwide. Despite having provisions in place, it became known as the supplier that unwittingly provided the racist and anti-Semitic t-shirts worn by far-right extremists who descended on Capitol Hill on January 6th, 2021.
This has become a cautionary tale of enormous proportions for all print-on-demand businesses, particularly as supposed ethical hacking resulted in the release of user data, and numerous petitions followed to boycott the website and even take it offline.
If only Teespring, and other sites that found themselves in the mix, had implemented a moderation system that prevented designs featuring offensive and harmful words from being uploaded in the first place. While users can disguise their uploads by leaving trigger words out of product titles and descriptions, they can’t get past Visual-AI.
Computer vision can ‘read’ words on uploaded designs even if they aren’t in simple print, halting the upload process and flagging it to the content moderation team.
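The upload check described above can be sketched as a simple blocklist match over OCR output. This is a minimal, illustrative sketch, not VISUA’s actual implementation: the blocklist is hypothetical, and the OCR step (running a text detection model on the uploaded design) is assumed to have already produced `ocr_text`.

```python
import re

# Hypothetical blocklist -- a real system would load a maintained term list.
BANNED_TERMS = {"hateword", "slur"}

def flag_design(ocr_text: str, banned_terms=BANNED_TERMS) -> bool:
    """Return True if any banned term appears in text extracted from a design.

    `ocr_text` is assumed to come from an OCR / text detection step run on
    the uploaded image, which is outside the scope of this sketch.
    """
    # Tokenize case-insensitively so "HATEWORD" and "hateword" match alike.
    words = re.findall(r"[a-z0-9]+", ocr_text.lower())
    return any(w in banned_terms for w in words)
```

A flagged result would then halt the upload and route the design to the moderation team rather than publishing it.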
2. Social Commerce/Ecommerce
Marketplaces and social commerce platforms face almost identical challenges to print-on-demand businesses. With users granted the ability to upload their own products for sale, there is a likelihood that some offensive, harmful and oftentimes illegal items will be listed on the website.
Etsy was forced to remove upsetting products referring to Auschwitz following the Capitol Hill riots. Despite apologizing for the Capitol Hill t-shirt debacle, it is known to still host products that push a far-right agenda, including “ultra MAGA” items. Once again, this is a situation in which computer vision, specifically text detection, could save Etsy’s brand management, PR and legal teams a lot of heartache and hard work. A text detection module in their content moderation system could flag these products and prevent them from being listed immediately, avoiding further reputational damage and perhaps even future legal cases.
3. Gaming platforms and online games
Gaming platforms have two areas in which users may be at risk of exposure to offensive or harmful content. The first is in the gameplay itself, on platforms or within games where gamers can create their own clothing, skins and other items. Uploaded images emblazoned with text may slip through content moderation systems unless text detection is specifically incorporated.
The second is through the platforms’ messaging facilities. Although not available on all gaming platforms, many provide the ability to message other users and even send images. While a content moderation system that includes Visual-AI elements such as object detection will spot obviously offensive items, it will miss any image that has harmful or upsetting text emblazoned on it. With many gaming platforms aimed at children, this is particularly important to police. Text detection can analyze images in real-time, ensuring that a message is blocked before it even arrives in the intended recipient’s inbox.
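The real-time gating described above can be sketched as a pre-delivery check. This is an assumed design, not a documented VISUA API: `scan_image_text` stands in for an actual text detection model call, and the function and type names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    blocked: bool
    reason: str = ""

def scan_image_text(extracted_text: str, banned_terms: set) -> ModerationResult:
    """Decide whether a message image may be delivered.

    `extracted_text` stands in for the output of a text detection model
    run on the attached image; the model itself is outside this sketch.
    """
    hits = [t for t in banned_terms if t in extracted_text.lower()]
    if hits:
        return ModerationResult(blocked=True, reason=f"banned terms: {hits}")
    return ModerationResult(blocked=False)

def deliver_message(image_text: str, banned_terms: set) -> str:
    # Block before the message ever reaches the recipient's inbox.
    result = scan_image_text(image_text, banned_terms)
    return "blocked" if result.blocked else "delivered"
```

The key design point is that the scan happens synchronously, before delivery, rather than reactively after a recipient reports the message.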
4. Social Media
Social media platforms are probably the first thing most people will think of when they hear the words “content moderation”, and with understandable reason. It is safe to say that the majority of teens and adults in the western world use social media every single day to express themselves, engage with like-minded people and keep up to date with loved ones around the world. It has revolutionized how we consume media and how we connect with other people for better, but also for worse.
Text detection for content moderation has the potential to be game-changing for social media platforms in a number of ways. Most notable is how it can help prevent online abuse and bullying through the medium of posting and messaging images and videos with offensive text on them.
Most social media platforms will flag obviously offensive comments and messages, either prompting the user to confirm they mean to send the message or routing it to the content moderation team; images and videos with harmful text embedded in them, however, remain an ongoing challenge. Some malicious users will obscure part of a word to avoid immediate flagging, but the intended message, and its harm, is still obvious. Text detection can analyze images and videos, spotting visually similar words that may be obscured and preventing them from being posted. This approach could be revolutionary in ensuring a safer place online for internet users.
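Matching obscured words can be sketched as a normalization step before the blocklist check. The substitution map and blocklist below are illustrative assumptions, one simple technique among several, rather than how any particular platform actually does it.

```python
# Map common look-alike characters back to the letters they imitate.
SUBSTITUTIONS = str.maketrans({
    "@": "a", "4": "a", "3": "e", "1": "i", "!": "i",
    "0": "o", "$": "s", "5": "s", "7": "t",
})

# Hypothetical blocklist entry for demonstration.
BANNED_TERMS = {"hate"}

def normalize(word: str) -> str:
    # Undo look-alike substitutions, then strip separators that
    # attackers insert to break a word apart (e.g. "h.a.t.e").
    word = word.lower().translate(SUBSTITUTIONS)
    return "".join(ch for ch in word if ch.isalpha())

def is_obfuscated_match(word: str, banned=BANNED_TERMS) -> bool:
    return normalize(word) in banned
```

Under this scheme, “h@te”, “h.a.t.e” and “HATE” all reduce to the same banned term, while an innocent word like “have” does not match.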
Some social media platforms may be concerned that implementing this kind of technology will prevent users from expressing themselves; however, it can work within their own parameters.
At VISUA, we know that every case is unique and discussing how this kind of technology will work within your organization is imperative to make sure it’s the right fit for your needs. Get in touch with us via the form below and we can arrange a time to discuss your project in detail.