Sarah Chander, Equinox Initiative for Racial Justice, and Caterina Rodelli, Access Now, are members of the #ProtectNotSurveil coalition.
In August last year, the European Union's landmark artificial intelligence law entered into force, a world first in the regulation of technology.
The law promised – albeit imperfectly and incompletely – to protect people from the most dangerous and discriminatory AI systems, while championing the "EU values" of trust, innovation and fundamental rights.
A year later, the world in which this legislation was drafted has largely disappeared.
Since then, we have witnessed a dramatic shift in global and European politics: a transatlantic race for AI supremacy, a deregulation agenda in Brussels, and a wave of militarization. These changes are not background noise – they upend the assumptions that shaped the AI Act and force us to ask uncomfortable questions: can we still talk about how "AI governance" can balance rights and innovation when those rights are no longer part of the conversation?
The AI Act in the era of militarized technology
The AI Act was born of a contradiction between two irreconcilable goals: regulating harmful uses of AI – particularly in policing, migration and surveillance – while aspiring to make the EU a global AI superpower. By 2024, that contradiction could no longer hold.
After the publication of the Draghi report, which criticized Europe's stagnant approach to innovation and regulation, the European Commission unveiled a sweeping deregulation program to "simplify" the AI Act in the name of "competitiveness". In June 2025, Commissioner Henna Virkkunen confirmed that the AI Act's few crucial safeguards could be watered down before they take effect in 2026.
Meanwhile, US President Donald Trump's return to the White House began with a commitment to invest $500 billion in privately led AI infrastructure and to significantly weaken US regulation. His administration has also moved to purge so-called "woke" AI and has accelerated the use of AI in surveillance, policing and military operations.
In the EU, the same priorities are taking hold. Under pressure to compete globally, the EU is increasingly choosing profit over rights. In the new EU Multiannual Financial Framework for 2028 to 2034, the Commission has proposed massive increases in military and border budgets, while social programs face drastic cuts. This means more public money for the tech, security and military industries.
This redirection of public funds amounts to a taxpayer-funded blank check for the very industry the AI Act was supposed to regulate. Billions are being channeled by the EU and member states into biometric border surveillance, predictive policing software, military-grade drone systems and AI-powered crowd-monitoring tools – all with little scrutiny and even less accountability.
The reality of AI governance
AI is not neutral. Its owners run an industry worth billions of dollars, in which defense is the biggest source of government demand. From Gaza to the Evros border between Greece and Turkey, European funds have been used by companies to develop AI technologies that control, target and punish people. This is automated repression, and it is booming on the EU's watch.
What we are witnessing is not a temporary tension – it is a revelation of what AI governance means in a militarized world.
Austria's recent use of facial recognition to track climate activists and Hungary's decision to legalize facial recognition at Pride are not one-off abuses. They are glimpses of the future we are heading towards, where AI policy is dictated by military demand and private profit, not civil rights.
These abuses have been sanctioned by European lawmakers. Under the AI Act's current loopholes, police and migration-control authorities benefit from broad exemptions, while member states can invoke national security to bypass basic protections. Predictive policing, risk scoring in migration procedures and biometric categorization based on proxies for race or ethnicity all remain alarmingly possible. Emotion recognition also remains permitted for use by law enforcement and migration officials. European states have continued to develop surveillance frameworks – in particular those targeting migrants and racialized and marginalized communities.
Towards a technology policy for people, not the security industry
In moments like this, it becomes clear that EU policy reflects the interests of those in power, and those interests are not ours.
We must stop pretending that rights can be balanced against profit, or that sweeping deregulation can coexist with dignity. This is not a fight for the best version of AI. It is a fight against a political program in which surveillance, control and extraction are sold as innovation.
This means rejecting the idea that competitiveness justifies rolling back protections. It means pushing back: strengthening the bans on mass surveillance, questioning the vast digital border systems Europe is deploying to deter migration, and holding governments accountable when they fund private surveillance with public money.
And it also means something deeper: we need visions of how to spend public resources that meet the needs of everyday people, not of the corporations shaping our world. Technology policy must be rooted not in military logic or market efficiency, but in care, equity and justice.
The AI Act will not become fully applicable until August 2026. The next 12 months are crucial. Civil society, journalists, researchers and activists must treat this not as a moment of celebration, but as a critical window to resist the erosion of hard-won protections.
We cannot afford to sleepwalk into a future where "AI governance" is just a euphemism for automated repression. Technology legislation must work for people, not for profit.