
What is Harmful Content Online?

digital parenting · online safety · Nov 06, 2023

Although the internet has myriad benefits to offer, it can also be home to all sorts of harmful content. From hate speech and cyberbullying to explicit material and misinformation, this content has the potential to inflict emotional, psychological, and sometimes even physical harm on individuals.

In this article, I'll cover what constitutes harmful content online and the steps legislators are taking to reduce its prevalence and impact.

 

What's the definition of harmful online content?

Harmful content is a broad term that encompasses many different things. Generally speaking, harmful content is any online material that causes distress or harm. However, an individual's interpretation of harmful content will be subjective based on factors such as their cultural, religious, and legal perspectives.

 

Types of harmful content

Although there are many categories of harmful content, here are some of the more common types.

  • Hate Speech
  • Cyberbullying
  • Harassment
  • Discrimination
  • Explicit, graphic, or violent content
  • Misinformation
  • Promotion of drugs and illicit activities
  • Indecent and/or abusive imagery
  • Terrorist or extremist material
  • Malware, viruses, scams, and phishing

 

How is harmful content being tackled?

Awareness is growing of just how much damage harmful content can cause, particularly for young people growing up in this digital age. Legislators are looking at ways to make tech companies take more responsibility for providing a platform for harmful content.

Specifically, governments are introducing new laws and regulations designed to protect the most vulnerable online.

 

The UK Online Safety Bill

The UK government is introducing a new law relating to online safety. The Online Safety Bill is largely focused on protecting children and young people online, although there are measures for safeguarding adults, too.

The purpose of the Bill is to make it more difficult for children and young people under 18 to access harmful online content. It will require tech companies, social media platforms, and online service providers to take proactive measures to protect children online.

The Bill will apply to any site, platform, or online destination accessible from the UK, regardless of where the online service is based or registered.

For more information on this new law, read my guide to the UK Online Safety Bill.

 

How do I report harmful online content?

Many online platforms already have content moderation systems and reporting mechanisms in place.

You can use these to report any harmful content you come across. Depending on the platform, the content is reviewed by either a human moderator or an automated system. If the content violates the platform's policies or guidelines, it may be taken down. In some cases, the account that posted the content can be suspended or even terminated.

While it's good that online providers have these measures in place, it's clear that they need to be more rigorous. This is why laws are being introduced to tighten protections for young internet users.

 

Reporting to Ofcom

Ofcom is the UK's communications regulator. It will oversee regulation under the Online Safety Bill and will have the power to hold to account companies that are in breach of it.

Until the Bill is passed into law, you can only submit a complaint to Ofcom if "you have reported content to a UK-established video-sharing platform but are unhappy with the outcome or if you have specific concerns about the platforms' safety measures – for example, any problems with reporting, flagging or age verification functions."

Once the Online Safety Bill has come into effect, Ofcom will have increased powers to tackle companies that breach the legislation.

 

Taking steps towards a safer digital world

At the moment, we're heavily reliant on a combination of moderation, reporting, and individual vigilance to limit the detrimental effects of harmful content online. However, legislation is beginning to catch up with the need for more rigorous protection measures (particularly for children).

The battle against harmful content online has only just begun; it will continue to demand a collective approach from individuals, communities, governments, and technology platforms alike.
