At this point, you will have at least heard of, if not seen, the shocking clothing worn by some people participating in the Capitol Hill riot on the 6th of January 2021. This merchandise, emblazoned with deplorable slogans like “Camp Auschwitz” and “6MWE” (“6 Million Wasn’t Enough”) rightly caused disgust, outrage and fear across social and mainstream media. The origin of the “Camp Auschwitz” shirts was traced back to New York company TeeHands. Other Neo-Nazi influenced prints have been found across many similar sites, causing the public to ask: How are these prints even making it past their trust and safety procedures?
Since the riots took place, online marketplace Etsy and Print-On-Demand company Teespring have issued apologies. Both plainly stated that this shouldn't have happened and that the policies and procedures in place to prevent this kind of material from being sold on their sites clearly failed. This should be a cautionary tale for all print-on-demand companies and marketplaces: if your procedures aren't as tight as possible, and your technologies not fit for purpose, you'll end up having to make an apology like this eventually.
This isn't the first time Etsy and Teespring have had to remove items and apologize for hosting far-right clothing on their websites. In October 2020 they pulled merchandise representing the far-right group Proud Boys, and many of the large marketplaces are only now saying they will make efforts to pull merchandise representing the controversial conspiracy group QAnon. It's something all marketplaces have been battling against for years, with growing intensity since Trump's presidential campaign began in 2015. Some of the offensive material is still available on other platforms that have not been called to task in the same way as Etsy, TeeHands and Teespring, such as Print-On-Demand website All Blue Tees.
So it would seem there is a systemic problem within these companies caused either by a lack of real interest in solving it or, perhaps more likely, a lack of knowledge about available technologies that can solve this problem.
Many of these companies have in-house trust and safety teams that manually review content that is uploaded. However, even they recognise that the volumes of designs uploaded every day far exceed their teams' capabilities to monitor them adequately. Others use custom software that is meant to raise a red flag on terms and imagery that might cause harm or offence, but in these cases, possibly because of the capabilities of the systems in question, it seems to flag content only after the products are up and live. It's therefore fair and accurate to say that there is room for improvement, and it's needed urgently.
The answer is in technology that can accurately detect and block content that breaches a company's acceptable content guidelines. Whether they develop these systems in-house or choose an external supplier to provide them, they must assure themselves that they won't leave gaps that allow content like this to be promoted, leaving their business at risk of a ruined reputation, boycotts and even lawsuits.
But given the scale of the problem here, and the realistic point of view that this is only going to continue, using an already tried and tested system, from a provider that specialises in this form of visual recognition, is the best way to eliminate these risks.
The company could load up imagery related to problematic designs, in this case imagery related to skulls, iron eagles, Nazi insignias, etc. They can also load up 'MAGA' logos. They then direct the system to look for anything that matches what's uploaded and, significantly, anything that resembles it: a catch-all, if you will. This technique can identify anything that matches imagery in their library at the point of upload. Sure, it might be a cute pirate shirt for a kids' party or for Halloween, in which case the trust and safety team can release it, but if it's far-right related, then it never makes it onto the site and the uploader can be informed immediately.
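To make the "matches or resembles" idea concrete, here is a minimal, hypothetical sketch using perceptual hashing, a common technique for near-duplicate image detection. VISUA's actual engine is proprietary and far more sophisticated; the function names and the 8x8 grayscale-grid input here are illustrative assumptions only. The principle shown: hash every reference image in the library once, hash each upload, and flag any upload whose hash sits within a small Hamming distance of a reference.

```python
# Illustrative sketch only: a simple "average hash" for near-duplicate
# image matching. Real engines use far more robust visual features.

def average_hash(pixels):
    """pixels: a 2D list of grayscale values (0-255), e.g. a downscaled
    design. Returns a bit string: 1 where a pixel is brighter than the
    image's average brightness, 0 otherwise."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming(a, b):
    """Count the bit positions where two equal-length hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def flag_upload(upload_hash, reference_hashes, threshold=10):
    """True if the upload matches, or closely resembles, any reference
    image in the moderation library."""
    return any(hamming(upload_hash, ref) <= threshold
               for ref in reference_hashes)
```

A flagged design would then be held for the trust and safety team rather than going live, which is exactly the workflow described above: automatic blocking at the point of upload, with human release for innocent look-alikes.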
Text that has been rendered as pixels can also be troublesome. If someone wants to buy a MAGA shirt, fine; there's nothing wrong with that. But a print company might, and arguably should, determine that MAGA combined with the word 'WAR' is something that should be flagged. Text detection allows a company to create a list of trigger words. When a design is uploaded, Visual-AI can look at the image, extract anything that looks like text, convert it to machine-readable text, and check it against the trigger list. If it matches, the design is either blocked or sent for review.
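The trigger-list step described above could be sketched as follows. This is an assumption-laden illustration, not VISUA's implementation: the OCR stage (extracting text pixels from the design and converting them to a string) is assumed to be handled by the Visual-AI engine, and the function name and rule format are invented for this example. It shows both kinds of rule mentioned: single terms that always flag a design, and word combinations (like MAGA plus 'WAR') that flag only when they appear together.

```python
# Hypothetical post-OCR trigger check. Input is the raw text string
# an OCR engine extracted from an uploaded design.

def check_triggers(ocr_text, blocked_terms, blocked_combos):
    """Return (decision, matches) for text extracted from a design.

    blocked_terms:  words that always flag the design on their own.
    blocked_combos: groups of words that flag the design only when
                    every word in the group appears together.
    """
    words = set(ocr_text.upper().split())
    matches = [t for t in blocked_terms if t.upper() in words]
    for combo in blocked_combos:
        if all(w.upper() in words for w in combo):
            matches.append(' + '.join(combo))
    return ('review' if matches else 'approve', matches)
```

For example, `check_triggers("MAGA 2024", [], [("MAGA", "WAR")])` approves the design, while the same call on a design reading "MAGA WAR" sends it for review, matching the policy sketched in the paragraph above.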
This all means anything with Neo-Nazi or Far Right associations, or indeed anything that could cause harm or damage can be stopped in its tracks, but also that anything ‘innocently’ flagged can be manually approved.
With a system like this, there is no reason a company should ever unknowingly host the likes of the Camp Auschwitz shirts again.
If companies are not using a tight system as described above, they are also leaving themselves open to copyright infringement lawsuits that could leave an alarming dent in any business's revenue.
70% of companies like these are ignoring copyright laws by not employing AI-driven technology. And it can't be said enough: having anti-infringement policies in your terms of agreement isn't enough! Take the 2018 case in which a judge awarded $19.2 million in damages to Harley-Davidson in an infringement lawsuit against Print-On-Demand company SunFrog.
GearLaunch has testified to the effectiveness of having technology like VISUA’s in place, confirming that it has helped them to increase legal compliance and improve relations with brand protection enforcement.
It's not just about protecting the business from controversy, it's about protecting the business full stop! Given the risk of legal cases, revenue loss and a damaged reputation, marketplaces and print-on-demand companies can't really afford to have loose standards in this department.
Print-on-Demand companies and marketplaces have had many slip-ups when it comes to hosting offensive, infringing and harmful merchandise on their platforms over the years. It's clear the systems in place at many of these organizations aren't performing as well as they should, not if we are seeing articles saying they have had to pull merchandise every few months. So we have to ask, what are they waiting for?
With calls on Twitter and Reddit for a boycott of some of these platforms becoming all too regular, there is no telling what the straw that breaks the camel’s back will be, putting one of them out of favour, and out of business, for good. The time to improve their trust and safety methods is now!