- The Netherlands has started real-world testing of its national GPT-NL AI model.
- GPT-NL is designed as a European alternative to Big Tech systems.
- The project focuses on practical uses in the public sector, including government communication and municipal assistants.
The Netherlands is trying to build a national artificial intelligence model that is not controlled by Silicon Valley. The country has developed its GPT-NL model over the past two years, and the system is now moving beyond the laboratory into real-world testing.
A partnership between Dutch government agencies and research organizations created GPT-NL. The idea was to focus less on viral demos and more on practical deployment within government agencies.
GPT-NL positions itself as infrastructure rather than a consumer chatbot competing for attention. If it works, GPT-NL will prove that an AI system can operate within European legal frameworks and public sector expectations without relying entirely on foreign companies. Europe already depends heavily on non-European cloud services, office software and AI systems, and proponents of GPT-NL argue that this dependence creates a strategic weakness.
Institutional AI
Five organizations have begun feasibility studies, with plans to expand the pilots and potentially launch commercially later this year. One of the first pilots concerns Gem, a virtual assistant already used by nearly thirty Dutch municipalities. The feasibility study examines whether GPT-NL can improve the quality of answers citizens receive when asking Gem questions.
Another pilot focuses on a government writing assistant designed to help civil servants write clearer letters. This may sound less glamorous than image generation or AI video tools, but it touches on a very real problem in public administration. Official communication about benefits, debt and social services is often dense enough to confuse the very people it is meant to help. GPT-NL is being tested to see if it can make these interactions more understandable.
The Netherlands Forensic Institute uses GPT-NL for its work, fine-tuning it on forensic datasets to improve the classification of huge volumes of investigative evidence. TNO, the Dutch research organization behind the project, is also testing GPT-NL internally for sensitive projects where commercial AI systems may raise privacy or security concerns.
Anti-Silicon Valley AI
Perhaps the most striking thing about GPT-NL is how it was formed. As major AI companies face growing legal battles over copyrighted training data, GPT-NL has entered into licensing deals with Dutch news publishers spanning newspapers, broadcasters and online media platforms across the country. According to the project, this is the first AI initiative in the world to enter into paid collective agreements with all major publishers in a single market.
This achievement is important because relations between journalism and AI companies have become increasingly hostile. Publishers argue that their work was scraped without permission and reused in systems capable of competing directly with the original reporting.
GPT-NL’s licensing terms are publicly documented, publishers are compensated, and technical safeguards are intended to prevent users from extracting licensed content verbatim via prompts.
Yet the project faces the same cost-of-scaling reality as almost every AI initiative. GPT-NL’s team of 25 developers and its budget are tiny by AI standards, and this tension undercuts the optimism surrounding the project. GPT-NL appears viable for institutional use, but continuing to improve the model while keeping pace with global AI development will require sustained funding and policy support.
Meanwhile, there are only a few significant AI challengers outside of America’s largest companies. The Netherlands is effectively testing whether there is another way forward, one focused on public institutions, negotiated copyright agreements, and local control of data infrastructure.