
After Election Chaos, Romania Proposes Platforms Invent Magic Wand As Content Moderation Solution

from the yeah,-sure,-that’ll-work dept

When faced with a difficult technology policy challenge, policymakers can respond in one of two ways: with careful analysis of what’s technically possible, or by demanding Silicon Valley simply wave a magic wand and make all problems disappear. Romania has chosen… the latter.

The backstory here is pretty straightforward: A Russian-supporting candidate leveraged TikTok to build surprising momentum in Romania’s presidential election. The results were controversial enough to get thrown out. And now Romania wants to make sure this never happens again by… well, by proposing content moderation rules that seem designed to ensure no social platform will ever operate in Romania again.

The country banned the candidate, the far-right Calin Georgescu, from running in the rematch, upsetting both Russia and its new best friend, the United States government.

The new regulations read like someone with zero understanding of content moderation took the EU’s Digital Services Act, already a challenging set of rules, and decided to make it exponentially more impossible. How impossible? Just look at what kind of wackiness Romania is actually proposing:

  • The Romanian proposal caps the spread of potentially harmful content at 150 users, whereas the DSA only requires platforms to mitigate systemic risks.
  • The proposal mandates removal of illegal content within 15 minutes; the DSA only requires platforms to act “expeditiously,” with no fixed deadline.
  • Platforms must classify content within 15 minutes, which is not required under the DSA.
  • If more than 30% of user-reported content is confirmed as illegal, platforms are fined 1% of turnover. The DSA imposes fines up to 6% of global turnover but does not use a user-report validation metric.

This is all, to put it mildly, nonsense.

These requirements fundamentally misunderstand both how content spreads online and how human nature works. The idea that platforms can somehow prevent “harmful” content (itself a nebulous and subjective term) from reaching more than 150 users ignores both the technical reality of how social networks function and the practical impossibility of accurately classifying content within minutes of it being posted. Even with unlimited resources and perfect AI (neither of which exist), these goals would remain impossible.
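
To make the spread problem concrete, here's a toy model of re-share dynamics. All of the numbers (the branching factor, the delay between hops) are illustrative assumptions, not real platform data, but any plausible values tell the same story:

```python
# Toy model of viral spread: each viewer re-shares to a handful of
# followers after a short delay. The branching factor and hop delay
# below are assumptions for illustration, not real platform metrics.

AVG_SHARES_PER_VIEWER = 3   # assumed: each viewer reaches 3 more people
SECONDS_PER_HOP = 60        # assumed: one minute between view and re-share
CAP = 150                   # Romania's proposed reach cap

viewers, elapsed = 1, 0
while viewers <= CAP:
    viewers *= AVG_SHARES_PER_VIEWER
    elapsed += SECONDS_PER_HOP

print(f"~{viewers} viewers after {elapsed // 60} minutes")
# With these assumptions, the post blows past 150 viewers in about
# 5 minutes, so enforcing the cap would mean classifying and throttling
# content in seconds, not minutes.
```

Tweak the assumptions however you like: slow the re-sharing down, shrink the branching factor. Exponential growth means the enforcement window stays measured in minutes at most, which is the whole point of a viral network.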

This would be difficult enough to do with a single piece of content. Now scale it up to many millions of pieces of content. Every day.
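
For a sense of what that scale implies, here's some back-of-the-envelope arithmetic. The volume and review-time figures are assumptions picked for illustration (real platform numbers vary widely), but the conclusion survives any realistic values:

```python
# Rough arithmetic on what a hard 15-minute classification deadline
# implies at scale. Volume and review-time figures are illustrative
# assumptions, not real platform data.

POSTS_PER_DAY = 10_000_000    # assumed daily uploads for one platform
REVIEW_SECONDS = 60           # assumed time for one careful human review
SECONDS_PER_DAY = 24 * 60 * 60

posts_per_second = POSTS_PER_DAY / SECONDS_PER_DAY
# A hard per-post deadline means reviews can never queue up: the
# workforce has to keep pace with the arrival rate at every moment.
reviewers_needed = posts_per_second * REVIEW_SECONDS

print(f"{posts_per_second:.0f} posts arriving per second")
print(f"~{reviewers_needed:.0f} reviewers working simultaneously, 24/7")
# ~116 posts/second means ~6,944 concurrent reviewers, around the
# clock, for a single platform, with zero slack for appeals, edge
# cases, or anything merely "potentially harmful" rather than
# clearly illegal.
```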

All rules like this are likely to accomplish is pushing various apps to block Romania entirely. And perhaps that's the goal. But it hardly seems like a productive approach.

Of course, as the report on this notes, the supporters of this approach somehow think that AI can solve it all:

Regarding the feasibility of these obligations, the explanatory memorandum of the proposal suggests that companies should be capable of triaging illegal content using artificial intelligence, which has demonstrated efficiency in recent studies.

This reflects the increasingly common fallacy that artificial intelligence is just the newest way to “nerd harder” — a magic wand that can somehow solve impossible content moderation challenges. The legal analysis linked above gives a quite understated warning that using AI to try to meet these requirements “could lead to errors and potentially infringe on human expression,” which barely scratches the surface of the problem.

What’s particularly ironic here is that Romania’s proposal simultaneously (1) demonstrates deep distrust of social media platforms while (2) demanding those same platforms deploy AI systems with godlike capabilities to perfectly moderate content. It’s asking companies they don’t trust to somehow build impossible technology they don’t understand.

Romania’s concerns about election interference are legitimate — they can point to very real harm from their recent election chaos. But responding with technically impossible demands isn’t just ineffective, it’s counterproductive. It distracts from developing actual solutions while creating compliance requirements that will likely drive services to simply block Romanian users entirely.

What we need instead are policymakers who understand both the real challenges of content moderation at scale and the actual capabilities and limitations of current technology. Until then, we’ll keep seeing this cycle: legitimate problems leading to impossible demands, followed by either widespread censorship or complete platform withdrawal. Neither outcome serves Romanian citizens or democracy.

Filed Under: ai, calin georgescu, content moderation, dsa, harmful content, romania
