The numbers behind TikTok’s content safety efforts

The platform's latest transparency report shows how many videos it removed for violating its guidelines.

While social networks like Facebook and Twitter continue to face scrutiny over how they moderate content and limit misinformation, TikTok has released another transparency report putting exact numbers on the content it removes to keep the platform safe for both users and brands.

In the last six months of 2020, over 89 million videos were removed from TikTok for violating its Community Guidelines or Terms of Service – which, the company points out, is less than 1% of all videos uploaded. Over six million accounts were removed for violating Community Guidelines.

What’s notable is how quickly those videos came down: 92.4% of violating videos were removed before a user could report them, 83.3% were removed before receiving any views, and 93.5% were removed within 24 hours of being posted.

The most common reason for video removal was minor safety – a high priority for a platform with a very young user base – which covers things like showing minors in possession of tobacco or alcohol or engaging in dangerous acts. The next most common reason was adult nudity or sexual activities, followed by illegal activities, violent content and harassment. Videos depicting illegal activities, minor safety violations and violent content – along with videos related to suicide or self-harm – were also the most likely to be taken down proactively, identified by TikTok’s technology before any user reported them.

Outside of these kinds of violations, 9.5 million spam accounts were removed from TikTok, along with 5.2 million spam videos. Automated technology also prevented over 173 million suspicious accounts from being created in the first place.

On the advertising front, 3.5 million ads submitted to the platform were rejected for violating advertising policies and guidelines, with violations ranging from misleading claims to promoting scams or harmful products.

Another major priority for TikTok in the second half of 2020 was combating misinformation on the platform as the U.S. election took place and the COVID-19 pandemic continued.

TikTok added banners directing users to an information hub about COVID-19 to over three million videos, most of them applied automatically. The platform also attached PSAs directing users to the WHO and local public health resources to certain hashtags; those PSAs were viewed over 38 billion times. Roughly 51,500 videos were removed for promoting COVID-19 misinformation; 86% of them came down before being reported, 87% were removed within 24 hours of being uploaded and 71% had zero views.

On the election front, TikTok still does not accept paid political ads, though users continue to upload their own political commentary and views to the platform. A banner similar to the COVID-19 one, linking users to trusted election information and resources, was added to nearly seven million videos. Almost 350,000 videos were removed for misinformation or disinformation, while another 441,000 videos flagged by fact checkers were not removed but were kept out of users’ “For You” content feeds. PSAs attached to election hashtags explaining how to verify information and report misleading content were viewed 73.8 billion times.

The company credited what it sees as an effective content moderation approach to several factors, including investment in tools that proactively identify offending content and limit users’ exposure to it, as well as work with fact checkers and experts to stay on top of the best ways to curb disinformation and to identify areas – such as certain hashtags – where it might spread.

Looking forward, the company plans to keep investing in proactive identification of misleading content and repeat offenders, and also pointed to further education efforts with influencers, particularly around the need to disclose paid content.

Last week, TikTok also announced that its TAG Brand Safety Certified status – a certification from the Trustworthy Accountability Group first rolled out in December – had been expanded beyond the U.K. to North America, the rest of Europe, Australia and New Zealand.