
Eight Types of Content Intermediary Web Companies Need to Detect & Block to be Compliant with the European Digital Services Act

Reading Time: 7 minutes

The European Digital Services Act (DSA) is a groundbreaking piece of legislation that aims to modernise the regulation of online businesses and better protect EU citizens from harmful content. The DSA is designed to strengthen enforcement against content that enables online bullying and harassment, hate speech, disinformation and the widespread sharing of illicit content, not to mention blocking counterfeiters who list fake and infringing goods.

However, for the first time, companies providing intermediary services for other online businesses are included in this legislation. The definition of ‘intermediary’ companies is quite broad and includes everything from hosting companies, domain registrars, and caching & content delivery network systems, to direct messaging services, virtual private networks, and voice over IP providers, and even website content management systems and email/survey platforms.

Not only will companies and users be able to seek compensation from providers of intermediary services for any damage or loss suffered due to an infringement of the DSA, but government agencies in each EU territory will be able to impose fines of up to 6% of annual turnover on companies found in breach of the legislation.

So, here are a number of things that intermediary companies need to prime themselves to detect and block in order to ensure compliance with this new act.


1) Counterfeit Goods


Intermediary companies will have a duty to ensure that they do not host any content relating to the promotion or sale of counterfeit goods, ensuring that consumers will be protected, not just from being conned into buying fakes, but also from potential harm.

There are numerous ways in which counterfeit goods can be detected. Logo detection, text detection and visual search are all computer vision tools that can detect logos, markings, specific imagery, product designs, labels, serial numbers and more.
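As an illustration, here is a minimal sketch of logo detection using OpenCV template matching. Production systems use trained detectors that are robust to rotation, scale and occlusion; the file names here are hypothetical, and template matching only illustrates the basic idea.

```python
# A minimal sketch, assuming a reference logo image is available.
# Real-world detection uses trained models; template matching is
# shown here only to illustrate the idea.
import cv2

def contains_logo(listing_path: str, logo_path: str, threshold: float = 0.8) -> bool:
    """Return True if the reference logo appears in the listing image."""
    listing = cv2.imread(listing_path, cv2.IMREAD_GRAYSCALE)
    logo = cv2.imread(logo_path, cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(listing, logo, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, _ = cv2.minMaxLoc(scores)
    return max_score >= threshold
```

A listing flagged by such a check would then be routed to trademark or counterfeit review rather than blocked outright.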

2) Brand Impersonation

Email systems, website content management systems, survey platforms, hosting companies and even domain registrars can all be exploited by bad actors, allowing them to create, host and disseminate social engineering and phishing attacks. Bad actors will use the logos of well-known companies and service providers, along with other graphical techniques, in emails and web pages, knowing that, until this legislation came into force, intermediary companies were not obligated to do anything about it.

3) Illicit Content

The case for moderation is never more obvious than when it comes to illicit content containing nudity or sexual acts. This is not only to protect unintended viewers, but also the subjects of the video, who may have had content recorded or posted without their knowledge, or hacked from their devices.

Ultimately, this is a very complex area with many platforms struggling to differentiate between artistic works and gratuitous content, and even between nudity and helpful public service content (e.g. how to do breast or testicular exams, or content that highlights recovery from breast cancer).

Because of this, it is a prime example of a problem that humans cannot manage alone (there is simply too much content to review) and that AI alone cannot solve (there are too many edge cases). But the combination of computer vision with a solid human moderation team is the perfect solution: Visual-AI can eliminate the obvious cases that need to be blocked while surfacing the content that requires human scrutiny.
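In practice, this division of labour is often implemented as a confidence-threshold triage. The sketch below assumes a hypothetical classifier `classify_image` that returns a score from 0.0 (clearly safe) to 1.0 (clearly explicit); the thresholds are illustrative, not recommendations.

```python
# A minimal triage sketch, assuming a hypothetical classifier.
def triage(image_bytes: bytes) -> str:
    score = classify_image(image_bytes)  # hypothetical: 0.0 (safe) .. 1.0 (explicit)
    if score >= 0.95:
        return "block"         # obvious violations never reach a human
    if score <= 0.05:
        return "approve"       # obviously safe content is published
    return "human_review"      # edge cases get human scrutiny
```

Tuning the two thresholds controls the trade-off between moderator workload and the risk of automated mistakes.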

4) Violent Content

Managing content containing violence of any kind is another incredibly difficult challenge. Not only must the public be protected, but the impact of watching this material on human moderators is often overlooked. Cases of PTSD and high staff turnover make consistency in moderating violent content even more of a challenge, and platforms increasingly face legal battles as moderators claim a lack of duty of care by the platforms they work for.

Of course, if the platform is targeted at vulnerable users (like minors) then a blanket ban on violent themes can be a great deal easier to implement. But even here, there can be difficulties. A video about cooking will invariably contain knives and educational content about safe gun use will contain guns, so any automated system needs to be more nuanced than simply blocking any content containing those objects.

In other cases, fictional content is posted, so a distinction between fiction and real-life violence must also be made. Yet the internet and social media are awash with sometimes horrifically violent content that should, rightly, be removed.

The answer, therefore, just as with illicit content, is to combine advanced computer vision, specifically trained to weigh multiple factors in its determinations (to reduce false positives), with human moderation to make the final decision on the more challenging examples. This combination can immediately eliminate the most obvious and gruesome content, so humans never have to see it, while highlighting the more challenging examples that humans need to deal with. In this way, platforms and especially intermediary companies (who have not had to deal with this type of content before) can show that they are taking every action to protect citizens and their moderators.
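To make the multi-factor idea concrete, here is a minimal sketch that weighs detected objects against scene context before deciding. Both detectors (`detect_objects`, `detect_scene`) and all labels are hypothetical assumptions for illustration.

```python
# A minimal multi-factor sketch: an object alone does not decide the
# outcome; the scene it appears in is weighed as well.
def assess_violence(image_bytes: bytes) -> str:
    objects = set(detect_objects(image_bytes))  # hypothetical object detector
    scene = detect_scene(image_bytes)           # hypothetical scene classifier

    if "graphic_violence" in objects:
        return "block"          # gore is removed before any human sees it
    weapons = {"knife", "gun"} & objects
    benign_context = scene in {"kitchen", "classroom", "shooting_range"}
    if weapons and not benign_context:
        return "human_review"   # a weapon outside an expected context
    return "approve"            # e.g. knives in a cooking video
```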


5) Hate Speech


Whether it’s content aimed at gaining support for terrorism, attacks on people because of their gender or sexual orientation, or any other form of hate speech or bullying, citizens should be protected and perpetrators sanctioned and, where appropriate, prosecuted.

Much of the hateful content created contains easy-to-spot visual themes, but the volume of content, plus the fact that it can be ‘hidden in the weeds’, makes the task onerous.

There are also products, like t-shirts, related to far-right hate groups sold on marketplaces. Indeed, companies like Etsy have been slammed in the media, with social media users calling for boycotts against them for not policing their listings better. The availability of such items on these websites does, after all, enable people to display their hateful messages not just on social media but in innocuous moments as they go about their daily business. The widespread availability and easy access to these kinds of items normalises hate speech and helps those with bad intentions to make a fashion statement of it.

The responsibility falls not only on marketplaces, but also on web hosting providers, social platforms and even cloud storage operators to prevent the sale of items and the dissemination of content with such obviously hateful messages and imagery.

6) Illegal And Controlled Products/Substances

Platforms and companies that don’t remove content related to illegal drug use and the promotion of controlled drugs will face the consequences once the Digital Services Act comes into force (intermediary companies included).

The issues around illegal drugs are obvious, but less obvious is the massive damage done by the sale of counterfeit controlled drugs, which are responsible (both directly and indirectly) for thousands of deaths every year.

Computer Vision can be trained to identify and highlight content containing drugs, drug paraphernalia and well-known drug brands and names, allowing companies to block said content and protect EU citizens.
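One piece of that pipeline can be as simple as reading the text in an image and matching it against a watchlist. The sketch below uses Tesseract OCR via pytesseract; the term list is a placeholder assumption, not a real watchlist.

```python
# A minimal OCR-plus-watchlist sketch using pytesseract.
from PIL import Image
import pytesseract

CONTROLLED_TERMS = {"oxycodone", "xanax", "fentanyl"}  # placeholder terms only

def mentions_controlled_substance(image_path: str) -> bool:
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return any(term in text for term in CONTROLLED_TERMS)
```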

7) Infringing Use Of Protected Imagery/Artwork/Icons/Logos

Whether it’s artwork or photography created by an independent artist or a large company, every image deserves to be protected from misuse, which in turn protects the artist, company or brand from lost revenue and reputational damage.

This means that sellers on marketplaces, and other types of companies that gain from the use of imagery, need to have permission before using a work. It also means that counterfeiters and other bad actors should be stopped from using the imagery for their benefit.

So, intermediary companies will need to apply computer vision tech to detect this type of visual content so they can take appropriate action (dependent on the use and context of the content).
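One common building block here is perceptual hashing, which matches re-uploads of a known work even after resizing or recompression. A minimal sketch using the imagehash library follows; the registry of protected hashes and the distance threshold are illustrative assumptions.

```python
# A minimal perceptual-hash sketch for spotting re-uploads of protected works.
from PIL import Image
import imagehash

# Hypothetical registry of hashes of protected artworks.
protected_hashes = {imagehash.phash(Image.open("protected_artwork.jpg"))}

def matches_protected_work(upload_path: str, max_distance: int = 5) -> bool:
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Subtracting two hashes gives their Hamming distance.
    return any(upload_hash - known <= max_distance for known in protected_hashes)
```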

8) Disinformation


Disinformation and misinformation are a scourge on society. Of course, free speech is a right and everybody is entitled to their opinion on any given issue, but when that opinion is promoted as truth, with purposely distorted or fabricated data, then that content must be blocked.

Computer vision on its own cannot make the determination, but it can surface the content that humans can then make balanced decisions on, dramatically simplifying and lightening the load for intermediary companies faced with this task.

Computer Vision Is the Answer

Dealing with all these different types of content in an ever more visual world can be quite overwhelming to consider. However, Computer Vision technology can be instrumental to making what seems an insurmountable task quite achievable.

There are a number of elements of computer vision that will enable any moderation technology to detect all of the above, as well as any other challenging visual content. Logo & mark detection, object & scene detection, as well as visual search and text detection, cover all the ground you need in order to ensure that your systems are fully compliant with the Digital Services Act when it comes into effect.

Implementation can be actioned through a simple API but, importantly, some companies don’t want or need to integrate the results of processed media into their own software. These companies have the option of Low-Code or No-Code implementations.
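For the API route, the integration is typically a single HTTP call per item of media. The sketch below is hedged: the endpoint URL, request fields and response shape are hypothetical, not any real provider’s API.

```python
# A minimal sketch of API-based integration; endpoint and fields are hypothetical.
import requests

def moderate(image_url: str) -> dict:
    response = requests.post(
        "https://api.example.com/v1/moderate",  # hypothetical endpoint
        json={"media_url": image_url, "checks": ["logo", "text", "nudity"]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"verdict": "human_review", "score": 0.63}
```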

Ultimately, even with computer vision technology and best practices in place, it will be nearly impossible to catch every infringement. However, a company’s ability to show that it makes every effort, using the best technology at hand, is often enough to avoid legal repercussions and penalties.

Book A Demo
