TikTok is putting hundreds of jobs at risk across the UK as part of a major restructuring of its trust and safety operations. The Chinese-owned social media giant is shifting increasingly towards artificial intelligence to police content on its platform.
The video-sharing platform currently employs more than 2,500 staff in the UK. Under the proposed global restructure, several hundred roles will be affected as the company concentrates operations in fewer locations worldwide.
Restructuring and AI shift
Work currently handled by affected UK staff will be transferred to other European offices and external providers. Some trust and safety roles and operations will remain in the UK, where the company has its headquarters in Farringdon, London, and a new Barbican office due to open early next year.
TikTok has already been cutting trust and safety jobs globally, including at its German headquarters in Berlin. A company spokesperson said they were "continuing a reorganisation that we started last year to strengthen our global operating model for trust and safety."
More than 85% of content removed for breaching community standards is now detected and taken down by automated systems. The company also claims that artificial intelligence limits human moderators' exposure to distressing material.
Union concerns and timing
The restructuring comes just one week after a planned union ballot was suspended, according to The Independent. A Communication Workers Union representative warned: "This news will put TikTok's millions of British users at risk."
The union official added that TikTok workers have been "sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives." The Independent describes content moderation as the "most dangerous job on the internet" with workers exposed to "stuff of nightmares."
Regulatory pressure
The job cuts come weeks after the UK's Online Safety Act, overseen by Ofcom, took effect. The legislation requires online platforms to shield UK users from illegal content, including child sexual abuse material and extreme pornography, and to block children from accessing harmful material.
TikTok's moderation staff receive training to recognise indicators that accounts may be operated by minors. The platform also deploys AI-powered systems that use keyword detection and community reports to flag suspected underage users.
Sources used: "Chronicle Live", "The Independent", "Morning Star Online" Note: This article has been edited with the help of Artificial Intelligence.