By Aurelio Mustacciuoli, Senior Advisor to the Head of Banking and Finance at Eurogroup Consulting Italy. He was a co-founder of Gooldie. He holds a master’s degree in Management from SDA Bocconi and studied Mechanical Engineering at the Sapienza University of Rome.
Thank you for the invitation. I’m here as director of Free Academy, a conservative libertarian think tank.
First, I’d like to share a basic assumption that many could agree on: regulation of AI is necessary. But what we regulate, and how, makes all the difference.
That said, Europe’s current risk-based approach is turning AI from a driver of abundance into an infrastructure of control, and at the same time into a competitive handicap.
The theme of this panel—“regulatory quagmire”—asks us to distinguish between rules that enable and rules that homogenize.
We can agree that we want: “Rules that protect without turning into a brake.”
And here comes the point I’d like to address:
- we truly are sliding into a quagmire of cumulative burdens;
- this approach does not protect citizens—on the contrary, it increases the risks for citizens and businesses.
I will organize my remarks in three sections: where we are, where we go, what to do.
I listen to this, and after 30 seconds, I am confident to say that the ship has sailed. Europe will play absolutely no role in the AI race.
— Fabian Wintersberger (@f_wintersberger) October 6, 2025
Where we are
Let’s see where we are and focus first on the regulatory framework and then on the current competitive landscape.
In terms of regulations we can sum up the situation this way: “Not one rule, but the sum of many rules.”
I’m not a lawyer; I’m a libertarian engineer with a passion for the Austrian School of Economics, and I am currently building digital platforms powered by AI agents. From an insider’s perspective, I see a regulatory cordon choking AI:
- The AI Act, with its risk-based approach: bans and AI-literacy obligations have applied since February 2025, obligations for general-purpose models since August 2025, and full application for many high-risk systems will follow by August 2027.
- GDPR, with its rules on personal data: it has direct legal implications for AI model training, which must not override data-subject rights.
- The DSA (Digital Services Act): aimed at reducing “systemic risks” in information ecosystems; it requires algorithmic audits for large platforms.
- The new Product Liability Directive (PLD): it provides for liability for software and AI when deemed a “defective product.”
Alongside, we have the European AI Office, the “control tower,” issuing guidance and coordinating oversight, especially over GPAI (general-purpose AI models).
Draghi notes that "while the ambitions of the EU's GDPR and AI Act are commendable, their complexity and risk of overlaps and inconsistencies can undermine developments in the field of AI by EU industry actors." A slap for EU policymakers who boast the 'Brussels effect.'
— Luca Bertuzzi (@BertuzLuca) September 9, 2024
In terms of the competitive landscape: “We are far behind” the USA and China.
The Commission announced a €50bn funding plan for AI, aiming to mobilize up to €200bn of public/private investments. Important numbers—but they won’t fix the gap if the environment pushes spending into compliance rather than innovation.
It is indicative that the proposed AI Liability Directive was withdrawn in 2025—an early sign that complexity was already overflowing.
In real life: “A startup launching an assistant for businesses needs to manage: AI Act (documentation/assessment), DSA (information risk if it scales), GDPR (legal basis/datasets), PLD (liability). Four tracks, one small team: compliance eats product time.”
Where we go
This regulatory cordon on AI has a double effect: over-regulation and monoculture.
The first is factual: many rules, tight timelines, and guidance that becomes a de facto standard.
The second arises when rules and practices standardize the set of values embedded in models.
How? For example, when the DSA and its accompanying guidance require platforms to mitigate information risks in election periods, with AI-content labels, reduced virality, and disclosures. Transparency is good; but the risk is sliding toward mandatory alignment on lawful yet contentious content. Or when ECAT (the European Centre for Algorithmic Transparency) doesn’t stay technical and neutral and instead creates single standards for ranking and moderation.
In simple terms: “If a guideline suggests reducing the visibility of ‘controversial’ topics even when lawful, platforms soon align: a ‘voluntary’ rule becomes a de facto obligation.”
To avoid this, regulators should change their mindset: stop trying to encode the “common good” into the models, and instead act to guarantee pluralism.
And how, concretely? For example, with the three mechanisms below (a brief illustrative code sketch follows the list):
- An ethical fork: allowing models with different ethical baselines to coexist, including a libertarian ethic centered on inalienable individual rights. Imagine: “Two models, same question; one optimizes for collective equity, the other for individual liberty. The user chooses the moral compass.”
Portability of value preferences: the user chooses—and carries—their profile (as with data portability). Imagine: “A values profile—libertarian, communitarian, conservative, progressive—that I can export from one model to another: same preferences, consistent answers.”
Filter transparency: making it clear how and why content is ranked/filtered; clearly separating illegality (to remove) from legitimate dissent (to preserve). Imagine: “A ‘Minimal filtering’ toggle in the feed and a concise report showing what was de-amplified and why: the choice returns to me.”
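To make these three mechanisms concrete, here is a minimal sketch in TypeScript. Everything in it is hypothetical: the EthicsProfile type, the model identifiers, and the routing function are invented for illustration, not an existing standard or API.

```typescript
// Hypothetical sketch of a portable "values profile" and an ethical fork.
// All names are invented for illustration; no such standard exists today.

type EthicalBaseline = "libertarian" | "communitarian" | "conservative" | "progressive";

interface EthicsProfile {
  baseline: EthicalBaseline; // the user's chosen moral compass
  minimalFiltering: boolean; // opt out of discretionary de-amplification
  // Hard legal limits stay outside the profile: illegal content is
  // removed regardless of user preferences.
}

// Ethical fork: the same question can be routed to models tuned to
// different baselines; the user, not the regulator, picks the compass.
const MODEL_FOR_BASELINE: Record<EthicalBaseline, string> = {
  libertarian: "model-individual-liberty", // hypothetical model IDs
  communitarian: "model-collective-equity",
  conservative: "model-tradition-weighted",
  progressive: "model-equity-weighted",
};

function chooseModel(profile: EthicsProfile): string {
  return MODEL_FOR_BASELINE[profile.baseline];
}

const myProfile: EthicsProfile = { baseline: "libertarian", minimalFiltering: true };
console.log(chooseModel(myProfile)); // -> "model-individual-liberty"
```

The point of the sketch is the design choice, not the code: the ethical baseline is data the user owns and carries, not a constant baked into the platform.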
Now that we have seen how we are sliding into over-regulation and monoculture, let’s ask ourselves what concrete risks this entails for citizens and businesses.
The EU’s rules on artificial intelligence are driving tech workers and companies to Silicon Valley, Chief Financial Officer Roger Dassen from the Dutch chipmaking giant ASML has said. https://t.co/RrG2tCZBUr
— POLITICOEurope (@POLITICOEurope) October 7, 2025
I see four main risks:
- Cultural Conditioning and Cultural Stagnation
“If the feed is one, society speaks with one voice.”
An algorithmic monoculture pushes society toward the dominant ideology, shrinking the space for legitimate dissent and cultural change.
An example? “If ‘recommended filters’ systematically cut minority—but legal—views, citizens see fewer alternatives—and vote on a flatter information diet.”
- Centralization of Power
“Technology is neutral; power architectures are not.”
The EU bans extreme practices such as social scoring: good. But if central control rooms and codes of conduct push major models toward one set of values, AI becomes a tool of control more than a tool of development. The defense is competitive pluralism: more models, user-selectable filters, open standards for personal agents (no opaque kill switches).
An example? “An assistant locked to one platform and subject to hidden remote kill switches puts control in central hands, not the user’s.”
- Loss of Competitiveness
“If capital goes to compliance, it doesn’t go to innovation.”
European CEOs have asked for deferrals on parts of the AI Act due to cost/complexity. Cumulative burdens (AI Act + DSA + GDPR + liability) favor incumbents and discourage startups/open source. Even €50–200bn is a drop in the ocean if the context slows innovation: the result is that we fall behind on foundation models and robotics.
An example? “If 1 out of 4 euros goes to audits/consulting, only 3 remain for R&D and market: the startup runs slower—and so does Europe.”
- AGI & Second-Order Existential Risk
“If ends justify means, means can crush ends.”
A “collectivist” AGI can normalize anti-individual trade-offs (“sacrifice 100 to save 1,000”). It’s a second-order risk, but real: without pluralism, alignment becomes dangerous or even lethal.
An example? “In an emergency, a system aligned to pursue the ‘common good’ may ‘rationalize’ individual sacrifices; with pluralism, that trade-off is not pre-set.”
Without ethical pluralism, decentralization, proportionate costs, and competitive openness, European AI risks being highly compliant yet low-value—and dangerous.
What to do
We have just said that we should “pursue pluralism, free choice, proportional cost, openness of models.” Yes, but how? Let’s look at some practical examples.
- Ethical pluralism should be encoded in the AI Act’s implementation
Guidelines and codes should make it clear that the goal is not a single moral alignment, but the coexistence of models with diverse ethics, within legal and safety limits. Likewise, portability of the value profile should be encoded as a user right (consistent with the Data Act).
Thus: “As we port data between services, we also port moral preferences: AI works for us, not on us.”
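As a purely illustrative sketch of what that export/import right could look like in practice, mirroring the pattern of data portability (the AssistantProvider interface and its methods are hypothetical, not an existing API):

```typescript
// Hypothetical portability flow: the values profile travels with the
// user between assistants, as personal data does under data portability.
// The AssistantProvider interface and its methods are invented here.

interface EthicsProfile {
  baseline: "libertarian" | "communitarian" | "conservative" | "progressive";
  minimalFiltering: boolean;
}

interface AssistantProvider {
  exportProfile(userId: string): EthicsProfile;                // user-initiated export
  importProfile(userId: string, profile: EthicsProfile): void; // apply on the new service
}

// Same preferences carried over: consistent answers on the new model.
function migrate(from: AssistantProvider, to: AssistantProvider, userId: string): void {
  const profile = from.exportProfile(userId);
  to.importProfile(userId, profile);
}
```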
- Similarly, legitimate disagreement should be explicitly protected in the DSA
Illegal content (which should be removed) should be clearly separated from lawful but contentious content (which should be subject to user choice and control). Multiple feeds and filters must be ensured (including a ‘Minimal filtering’ option), the effects of ranking disclosed, and real opt-outs allowed—especially in sensitive periods.
Such as: “During elections, users can choose an open feed and see clear indicators of any demotions applied.”
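What might such an indicator contain? A minimal sketch, with invented names; the record format and the summary function are assumptions for illustration, not anything the DSA currently prescribes:

```typescript
// Hypothetical disclosure record for filter transparency: what was
// demoted, why, and whether the user's choice can undo it.

interface DemotionRecord {
  itemId: string;
  action: "demoted" | "label_added";
  reason: string;       // e.g. "election-period guidance" (illustrative)
  lawful: boolean;      // lawful-but-contentious vs. illegal content
  overridable: boolean; // can "Minimal filtering" reverse it?
}

// Concise user-facing report: only lawful demotions are user-reversible;
// illegal content stays outside the scope of user choice.
function summarize(records: DemotionRecord[]): string {
  const reversible = records.filter(r => r.lawful && r.overridable).length;
  return `${records.length} items affected; ${reversible} reversible via "Minimal filtering"`;
}

console.log(summarize([
  { itemId: "post-42", action: "demoted", reason: "election-period guidance", lawful: true, overridable: true },
]));
```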
- Competitiveness should be a top priority and pursued with smart proportionality + real sandboxes
This implies:
- modulating obligations for SMEs (Small and Medium-sized Enterprises) where risks are low;
- allowing sandboxes with firm timelines and mutual recognition across Member States;
- pushing public investment into shared computing, open datasets/benchmarks, robotics, and open standards, in order to avoid lock-in.
For instance: “A sandbox with a 6-month cap and EU-wide effect: you exit with a binding checklist—know what to change and scale without re-doing everything.”
In conclusion, AI can become the greatest infrastructure of abundance — or the perfect infrastructure of control. The difference lies in the direction of regulation.
Ethical pluralism of models, user choice, proportional compliance burdens, and competitive openness are the keys for AI to unlock innovation and freedom in Europe.
“Regulate to liberate, not to direct.”
This is a transcript of the author’s speech at the conference “How will AI transform Europe?”, hosted by the Patriots for Europe Foundation in Brussels on 30 September 2025.