Research reveals that EU AI rules stop at its borders with little accountability for human rights impacts abroad
By Nadim Nashif
This post is part of Global Voices’ April 2026 Spotlight series, “Human perspectives on AI.” This series will offer insight into how AI is being used in global majority countries, how its use and implementation are affecting individual communities, what this AI experiment might mean for future generations, and more. You can support this coverage by donating here.
The European Union (EU) presents itself as the world’s most ambitious regulator of artificial intelligence (AI). With the AI Act, the Digital Services Act, the Corporate Sustainability Due Diligence Directive (CSDDD), and the Dual-Use Regulation, Brussels has built an elaborate architecture of rules designed to ensure that technology serves human rights. What that architecture does not do is follow the technology when it crosses a border.
Our research at 7amleh, the Arab Center for the Advancement of Social Media, traces what happens when it does.
A system, not isolated failures
What we found is not a series of isolated procurement mistakes or funding oversights. It is a pattern. A structured, largely opaque system through which European money flows into high-risk AI and surveillance technologies, which then reach governments and militaries in the West Asia and North Africa (WANA) region, including Israel, with no binding obligation to assess what those technologies will be used for, against whom, and at what human cost.
The mechanism has three main channels.
The first runs through migration control. Under the EU’s main instrument for third-country cooperation, 10 percent of the financial envelope is earmarked for migration governance. In 2023 and 2024, the EU signed agreements with Egypt, Morocco, Tunisia, and Lebanon, conditioning aid on those governments’ cooperation in controlling irregular migration. What followed was the transfer of biometric identification systems, traveller screening tools, smart border gates, and maritime surveillance infrastructure.
The funding is rarely direct. It is routed through member states and implementing organisations, creating layers of institutional distance that make accountability structurally difficult to establish. People in transit face detention, violence, and pushbacks before they can make asylum claims. The infrastructure that the EU helps build does not protect them. In many documented cases, it places them in greater danger.
The second channel runs through research and innovation funding. Horizon Europe has awarded grants to Israeli companies whose technologies have been documented to have military applications. The European Defence Fund has channelled significant resources to companies with direct ownership links to Israeli weapons manufacturers. The European Investment Fund has invested, through fund-of-funds (FoF) structures, in Israeli surveillance and spyware companies.

Horizon Europe funded Xtend, which was subsequently contracted by the Israeli Ministry of Defence to supply thousands of assault drones. The European Defence Fund channelled over 15 million euros to Intracom Defense, a company 94.5 percent owned by Israel Aerospace Industries (IAI). The European Investment Fund committed 21.2 million euros to a fund that invested in Paragon Solutions, an Israeli spyware company whose tools have been used against journalists, activists, and human rights defenders across the region.
Direct exports, minimal safeguards
The third channel is the most straightforward: direct commercial exports. European companies sell facial recognition systems, biometric tools, drone components, and smart city technologies to governments across WANA with no binding obligation to conduct human rights due diligence before or after the sale.
On paper, many of these technologies are described as civilian. In practice, the line between civilian and military use is almost always blurred, particularly in authoritarian contexts and active conflict zones. A facial recognition system described as an urban management solution functions as an instrument of mass surveillance in the hands of a government with no judicial oversight. A targeting system described as AI-assisted intelligence becomes something else entirely when it generates lists with documented inaccuracies and minimal human oversight in the decision chain.
In Gaza, that something else has a name. High-risk AI targeting systems have been deployed in conditions that violate core principles of international humanitarian law. Approximately 67,000 deaths have been officially reported, the majority civilians. A study published in The Lancet found that life expectancy in Gaza was halved within the first year of the war. The International Court of Justice has found that Israel’s conduct involves plausible acts of genocide. And yet, in 2020, Frontex awarded 100 million euros in contracts to two Israeli companies, IAI and Elbit Systems, to operate surveillance drones over the Mediterranean. Both platforms have been actively deployed by the Israeli military in Gaza. Europe accounted for 45 percent of Israeli arms exports in 2024, a nearly 20 percent increase from 2023, coinciding with the escalation.
Political inertia despite mounting evidence
In September 2025, the European Commission itself acknowledged, in response to a parliamentary question, that it had come to the conclusion that Israel is violating human rights and humanitarian law. A proposal to partially suspend Israeli entities from Horizon Europe was put forward in July 2025. As of today, it has still not been approved by the Council. Some member state governments continue to block it.
This is not a knowledge problem. The evidence is not in dispute. It is a political problem, one of institutional inertia, economic interest, and the absence of binding mechanisms that would force a different outcome.
The path forward is not complicated to describe, even if it is politically difficult to achieve. The AI Act must be extended to cover exports: systems that are prohibited or classified as high-risk within the EU can currently be sold freely to third countries, a fundamental gap that the regulation does not address. Binding human rights due diligence must apply to all exports of AI and dual-use technologies, regardless of company size. The CSDDD was significantly weakened in November 2025 when the European Parliament raised the employee threshold to 5,000, exempting the vast majority of European technology companies.
Migration agreements must be subject to independent, public human rights impact assessments before signature, not after. Moreover, the European Union must urgently reassess Israel’s participation in Horizon Europe, including the eligibility of Israeli entities as beneficiaries, in light of their human rights conduct and in line with the EU’s own position, values, and legal obligations.

The fundamental challenge is visibility. These funding flows, procurement decisions, and technology transfers operate largely out of public view. The further the distance between a decision made in Brussels and its consequences on the ground, the easier it is to avoid accountability. That distance is not accidental. It is a design feature.
Closing it, making the connections readable, specific, and public, is what accountability requires. It is also, for the communities living under the infrastructure Europe helps build, what survival sometimes depends on.