
Matthieu Boutard, 18 January 2023

The Impact of Toxic and Harmful Content on Brands, Their Teams and Customers

Online toxicity can be damaging for brands, harming the well-being of their frontline staff and creating real commercial consequences when customers are exposed to it. So, how can companies work to alleviate these negative effects?

Here, Matthieu Boutard, President and co-founder of Bodyguard.ai, outlines the benefits and challenges of content moderation and explores how companies can take a blended approach to achieve the best outcomes. 

With the Online Safety Bill set to come into UK law in the coming months, much attention has been paid to the negative impact of social media on its users. 

The goal of the bill is to deliver upon the government’s manifesto commitment to make the UK the safest place in the world to be online. However, it will need to strike a critical balance to achieve this effectively. 

According to the Department for Digital, Culture, Media and Sport (DCMS), it aims to keep children safe, stop racial hate and protect democracy online, while equally ensuring that people in the UK can express themselves freely and participate in pluralistic and robust debate.

The bill will place new obligations upon organisations to remove illegal or harmful content. Further, firms that fail to comply with these new rules could face fines of up to £18 million or 10% of their annual global turnover – whichever is higher.

Such measures may seem drastic, but they are becoming increasingly necessary. Online toxicity is rife, spanning all communications channels, from social media to in-game chat. 

In exploring the extent of the problem, we recently published an inaugural whitepaper examining the online toxicity aimed at businesses and brands in the 12 months that ended July 2022.

During this process we analysed over 170 million pieces of content across 1,200 brand channels in six languages, finding that as much as 5.24% of all content generated by online communities is toxic. Indeed, 3.28% could be classed as hateful (insults, hatred, misogyny, threats, racism, etc), while 1.96% could be classed as junk (scams, frauds, trolling, etc). 

Three Key Challenges of Content Moderation

Unfortunately, the growing prevalence of online hate and toxic content is increasingly seeping into brand-based communication channels such as customer forums, social media pages, and message boards.

For brands, this can have a significant commercial impact. Indeed, one study suggests that as many as four in 10 consumers will leave a platform after their first exposure to harmful language. Further, they may share their poor experience with others, creating a domino effect of irreparable brand damage. 

It is therefore important that brands moderate their social media content to remove toxic comments. However, doing this effectively is no easy task, and there are several potential challenges.

First, it can be a highly resource-intensive and taxing task to complete manually. A trained human moderator typically needs 10 seconds to analyse and moderate a single comment.

Therefore, if hundreds or thousands of comments are posted at the same time, managing the flow of hateful comments in real time becomes an impossible task. As a result, many content moderators are left mentally exhausted by the sheer volume of work.
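The scale problem above can be made concrete with some back-of-envelope arithmetic, using the article's figure of roughly 10 seconds per comment for a trained human moderator (the function name and team sizes below are illustrative assumptions):

```python
# Rough moderation-throughput estimate, assuming ~10 seconds
# of human attention per comment, as cited above.
SECONDS_PER_COMMENT = 10
SECONDS_PER_HOUR = 3600

def comments_per_hour(moderators: int) -> int:
    """Comments a team can review in one hour at 10 s per comment."""
    return moderators * SECONDS_PER_HOUR // SECONDS_PER_COMMENT

# One moderator clears 360 comments per hour, so a spike of
# 10,000 comments would take a single person nearly 28 hours.
print(comments_per_hour(1))   # 360
print(comments_per_hour(10))  # 3600
```

At that rate, even a ten-person team cannot keep pace with the comment volumes large brand channels see during a viral moment, which is why real-time manual moderation alone breaks down.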

Second, being repeatedly exposed to abusive language, toxic videos and harmful content can take a psychological toll on moderators. The mental health of these individuals cannot be overlooked, and burnout driven by toxicity is costly for businesses, potentially accelerating employee turnover.

Third, companies need to tread a fine line when moderating to ensure they aren't accused of censorship. Channels such as social media are often the primary place where customers engage with brands, provide feedback and hold them to account. Brands that give the impression they are simply deleting any critical or negative comment may also come under fire.

A Blended Approach for Balanced Outcomes

Fortunately, AI and machine learning-powered technologies are beginning to address some of the challenges facing human moderators. However, there are further issues that need to be ironed out here. 

Machine learning algorithms currently used by social platforms such as Facebook and Instagram have been shown to have an error rate that can be as high as 40%. As a result, only 62.5% of hateful content is currently removed from social networks according to the European Commission, leaving large volumes of unmoderated content out there that can easily impact people and businesses.

What’s more, these algorithms also struggle to manage the sensitive issue of freedom of expression. Lacking the ability to detect linguistic subtleties, they are prone to overreacting and can lean too far towards censorship.

With both human moderation and AI-driven solutions having their limitations, a blended approach is required. Indeed, by combining intelligent machine learning with a human team comprising linguists, quality controllers and programmers, brands will be well-placed to remove hateful comments more quickly and effectively.
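The blended approach described above is often structured as a triage pipeline: a machine learning classifier scores each comment, clear-cut cases are handled automatically, and only the ambiguous middle band is routed to the human team. The sketch below illustrates that routing logic under assumed thresholds and a generic score input; it is not any specific vendor's implementation:

```python
# Illustrative triage for a blended moderation pipeline.
# Thresholds and function names are assumptions for this sketch.
REMOVE_THRESHOLD = 0.9   # confident enough to remove automatically
ALLOW_THRESHOLD = 0.2    # confident enough to publish automatically

def route_comment(toxicity_score: float) -> str:
    """Route a comment given a model's toxicity score in [0, 1]."""
    if toxicity_score >= REMOVE_THRESHOLD:
        return "remove"        # clearly toxic: take it down immediately
    if toxicity_score <= ALLOW_THRESHOLD:
        return "publish"       # clearly benign: let it through
    return "human_review"      # ambiguous: escalate to the human team

print(route_comment(0.95))  # remove
print(route_comment(0.05))  # publish
print(route_comment(0.55))  # human_review
```

The design point is that humans only see the fraction of content the model cannot confidently classify, which both shrinks the workload and limits moderators' exposure to the most unambiguous toxic material.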

Of course, selecting the right solution here will be key. Ideally, brands should look to adopt a solution that is advanced enough to recognise the differences between friends interacting with “colourful” language, and hostile comments directed towards a brand. 

Striking this balance is vital. To encourage engagement and build trust in online interactions, it is crucial that brands work to ensure that toxicity doesn’t pollute communications channels while also providing consumers with a platform to criticise and debate.

Thankfully, with the right approach, moderation can be effective. Indeed, it shouldn’t be about prohibiting freedom of expression but preventing toxic content from reaching potential recipients to make the internet a safer place for everyone.
