HUNGARY VS DISINFORMATION: REGULATION WITH NEGLECTED ENFORCEMENT

As part of the Visegrad Alliance for Digital Rights and Disinformation Defense project, an expert roundtable discussion was organized on 30 September 2025 by the ELTE CSS Institute for Legal Studies in Budapest.

Participants of the roundtable:

  • Gábor Polyák – Senior research fellow, ELTE CSS Institute for Legal Studies
  • Nóra Falyuna – Assistant Professor, Ludovika University of Public Service
  • Boldizsár Szentgáli-Tóth – Senior research fellow, ELTE CSS Institute for Legal Studies
  • Rudolf Berkes – Researcher, ELTE CSS Institute for Legal Studies; PhD student, ELTE Doctoral School of Sociology

During the discussion, the invited experts reviewed and debated the findings of the National Report on the regulatory environment of disinformation, content regulation, and AI in Hungary. We appreciate the comments and insights of our colleagues and participants, especially Orsolya Ferencz, who was also directly involved in writing the report. This blog post summarizes the main points raised during the roundtable.

According to our participants, Hungary stands at a complex crossroads in the fight against disinformation, where legal frameworks exist on paper but practical enforcement reveals significant gaps and political tensions. The country’s regulatory environment presents a paradoxical landscape characterized by comprehensive legislation alongside systemic under-enforcement and growing political pressures on independent oversight.

Hungarian law lacks a general statutory definition of “disinformation”; the term appears only in government policy documents, not in binding legal provisions. The absence of a precise legal definition creates uncertainty for both content creators and enforcement authorities. The Criminal Code nonetheless addresses related offenses through the “scaremongering” provision (Section 337), which criminalizes publishing or disseminating false facts capable of causing public panic, with penalties escalating to five years’ imprisonment during special legal orders such as states of emergency.

The COVID-19 pandemic marked a turning point in Hungary’s approach to information control. The 2020 Authorization Act permanently extended the penalties for “scaremongering” during states of danger, leading to over 100 investigations targeting alleged fake news, primarily focused on journalists questioning government preparedness. This development illustrated how emergency powers can reshape the information landscape long after the initial crisis passes.

The National Media and Infocommunications Authority (NMHH) serves as Hungary’s primary digital content regulator and Digital Services Coordinator under the EU’s Digital Services Act (DSA). However, the institution faces significant challenges regarding independence and capacity. Civil society organizations have raised concerns about the NMHH’s political independence, with some EU parliamentarians expressing apprehensions about the authority’s ability to enforce digital regulations impartially.

Experts believe that the enforcement gap between formal legislation and practical implementation represents one of the most significant challenges in Hungary’s regulatory environment. Criminal statistics reveal extremely low prosecution rates for hate crimes and incitement, with most proceedings terminated at the investigative phase due to systematic investigative failures. The European Court of Human Rights has ruled against Hungary in four hate crime cases, finding violations of fundamental rights due to omissions by law enforcement authorities.

Hungary also faces new challenges with the rise of artificial intelligence in political communication. The 2024 European Parliament election campaign saw extensive deployment of AI-generated propaganda by the ruling Fidesz party against opposition politicians, with proxy organizations scaling up AI-generated video production ahead of the 2026 general elections. Despite these developments, Hungarian electoral regulations contain no specific provisions addressing AI-generated media transparency or disclosure requirements.

The government has established the Hungarian Artificial Intelligence Office through Government Decision 1301/2024 to oversee EU AI Act implementation, including requirements for labeling AI-generated content. However, the practical effectiveness of these measures remains to be tested in the rapidly evolving digital landscape.

Recent legislative developments reveal increasing pressure on independent media and civil society organizations. The controversial “Transparency of Public Life” bill (T/11923) would have empowered the Sovereignty Protection Office to blacklist organizations receiving foreign funding without meaningful judicial review. While the proposal was postponed following widespread protests and resistance from professional organizations, it demonstrates the ongoing tension between information control and democratic freedoms.

The 2024 Act on the Suppression of Internet Aggression introduced new administrative measures for harmful online content, expanding the toolkit available to authorities while raising concerns about potential overreach. These developments reflect a broader pattern where the government uses the pretext of “influencing public discourse” to justify expanded regulatory powers.

Hungary’s regulatory environment reflects broader European struggles with balancing free expression, democratic governance, and effective content moderation in the digital age. While the country has implemented EU regulations like the DSA and terrorist content regulation, questions remain about the independence and effectiveness of enforcement mechanisms.

According to the experts, the systematic under-enforcement of existing hate speech laws, combined with the expanded use of “scaremongering” provisions against critics, suggests that Hungary’s approach prioritizes political considerations over consistent rule application. As AI-generated content becomes increasingly prevalent in political discourse, the need for transparent, proportionate, and democratically accountable regulatory frameworks becomes ever more urgent.

The Hungarian case demonstrates that formal compliance with EU digital regulations does not guarantee effective protection of democratic discourse. The country’s experience serves as a cautionary tale about how political pressures can undermine even well-designed regulatory frameworks, highlighting the crucial importance of institutional independence in maintaining democratic accountability in the digital age.

Author: Rudolf Berkes, Researcher at ELTE CSS Institute for Legal Studies, and PhD student at ELTE Doctoral School of Sociology.

The publication presents outputs of the research project: ‘Visegrad Alliance for Digital Rights and Disinformation Defense,’ supported by the Visegrad Fund (Project No: 22430134). The project is co-financed by the governments of Czechia, Hungary, Poland, and Slovakia through Visegrad Grants from the International Visegrad Fund. The mission of the fund is to advance ideas for sustainable regional cooperation in Central Europe.
