AI-driven content moderation on social media raises user concerns
The NMHH has commissioned a study that sheds light on how Facebook and YouTube moderate content and restrict accounts. According to study author Zsolt Ződi, large platforms use artificial intelligence (AI) to make millions of sanction decisions every month, often without prior human review. Although the EU Digital Services Act (DSA) requires transparency in these processes, users are usually informed of the reasons only inadequately, if at all. Those affected can lodge an objection, but the complaints procedures are largely automated, meaning that blocks are rarely lifted.
The major platforms do not publish country-specific data on their moderation practices. The National University of Public Service (NKE) therefore conducted a representative survey. According to it, around 15% of respondents – almost half a million Hungarians in total – have already had content deleted or restricted, an increase of five percentage points compared to previous years. Half of those affected had been blocked more than once, and a quarter of the accounts were blocked permanently. Only 10% of the blocked posts and accounts were subsequently restored.
Beyond clearly illegal content, YouTube’s rules even allow the removal of content that ‘could cause harm’ to the service. Typical reasons for blocking are spam and fake accounts, but hundreds of thousands of posts are also restricted for hate speech and disinformation. In many cases, posts are shadow-banned without the user’s knowledge: they simply stop appearing in other users’ feeds. The study emphasises that, precisely because online platforms have become indispensable, they should allow for significantly more human contact in their moderation processes and therefore need to employ considerably more staff.