The race for dominance in artificial intelligence has long since stopped resembling an academic contest between brilliant minds and research labs. It is a clash over economic influence, military advantage, control of supply chains and – in a broader perspective – the shape of the future technological order. When I look at what is happening in the US, China and Europe, I see three different philosophies of AI development. Each is rooted in a different approach to capital, regulation, infrastructure and engineering culture.

This article is an attempt to bring some order to the picture of technology, research and innovation related to AI. I should warn you that I am forced to rely on far-reaching simplifications. I will show how the various “camps” differ in their approach to model architecture, open-source strategies and ways of using hardware. I will also try to sketch, in broad terms, what the situation looks like in the area of work/workers and scientific research.


Europe, China and the US – these three different approaches to AI development, though of course not the only ones – are shaping the future of the global technological order.

This will be a long text, but the topic deserves it. It is part of a series accompanying my research on LLMs (a report from it should appear on the site on 9 March). In the remaining two texts I will look at how work on AI is funded and how different geographical areas approach AI regulation and supporting development through public institutions. In the articles in this series I leave out other countries in the Americas, other countries in Asia, Africa and Australia with Oceania. This is, of course, a shorthand and absolutely does not mean that nothing is happening there in the field of artificial intelligence.

AI architecture: scaling versus efficiency

Let’s start by trying to characterise the different geographical areas so we can better understand what makes each of them distinctive.

The US and “scaling laws”

American companies and labs – OpenAI, Google DeepMind, Anthropic – have for years been consistently pursuing a strategy based on so-called scaling laws. Put simply: the more parameters, the more data and the more compute, the better the models. For several years this hypothesis worked almost like a law of physics.
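To make the scaling-laws intuition concrete, here is a toy sketch of the loss curve the hypothesis describes. The functional form and constants follow the published Chinchilla-paper fit; treat the exact numbers as illustrative, not as a recipe.

```python
# A toy illustration of the "scaling laws" intuition: loss falls as a
# power law in parameter count and training tokens, so bigger keeps
# being better -- but each doubling buys less than the last.

def toy_loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style loss curve: L = E + A/N^a + B/D^b."""
    E, A, B, a, b = 1.69, 406.4, 410.7, 0.34, 0.28  # published Chinchilla fits
    return E + A / n_params**a + B / n_tokens**b

# Scale parameters 10x at a time, with ~20 tokens per parameter
# (the Chinchilla compute-optimal rule of thumb).
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params: loss ~ {toy_loss(n, 20 * n):.3f}")
```

Running this shows the loss improving at every step, but by shrinking margins, which is exactly the "diminishing returns" debate discussed later in this text.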

The US, China and Europe have unique philosophies of building AI models that reflect their specific resources and goals.

AI development strategies in the US, China, and Europe

Models such as GPT-5.2, Gemini 3.1 and Claude Opus 4.6 set the SOTA (state of the art) standard across many benchmarks – from mathematical reasoning through code generation to multimodality. Americans still dominate the segment of the largest closed models, whose architectural details and training data remain trade secrets.

With the most recently released models, I have the impression that the emphasis has been mainly on three aspects:

  • multimodality – integrating the handling (mainly reading) of text, images, audio and video in a single model;
  • huge context windows – allowing analysis of entire code repositories, hundreds of pages of documents or long recordings;
  • suitability for agentic scenarios, which are meant to deliver not so much an answer as an outcome – the completion of the assigned task.

On top of that, of course, many other improvements are being introduced and there are also tools not directly related to the models themselves, designed to make it easier for us to tap into their potential – the canvas in ChatGPT, for instance.

In practice, the model stops being a “chatbot” and becomes something like an operating system for knowledge. For business, this is a huge shift – because you can analyse whole collections of legal documents, sizeable bodies of project documentation or application code, all without having to split them into chunks. And most importantly – you can carry out tasks on the basis of that data.
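For intuition, here is a rough sketch of the chunking decision that large context windows remove. The 4-characters-per-token heuristic is only an approximation for English text, not a real tokenizer.

```python
# A back-of-the-envelope check of whether a document fits a model's
# context window. With a small window you must split it into chunks;
# with a huge one you can pass it whole.

def rough_token_count(text: str) -> int:
    """Approximate token count: ~4 characters per token in English."""
    return max(1, len(text) // 4)

def needs_chunking(text: str, context_window: int) -> bool:
    return rough_token_count(text) > context_window

doc = "lorem ipsum " * 50_000           # ~600k characters, ~150k tokens
print(needs_chunking(doc, 128_000))     # small window: must chunk -> True
print(needs_chunking(doc, 1_000_000))   # huge window: fits whole -> False
```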

The American strategy is simple: if the scaling law works, exploit it to the limit. In that sense, the US is trying to reach AGI through brute computational force. (A side note: electricity may be the challenge, but I’ll write more about that in the next piece.)

But is this path not approaching a plateau?

More and more researchers – including from American labs – suggest that further increases in parameter count bring diminishing returns in quality. On top of that, enlarging the context window drives the required compute up superlinearly – quadratically, in fact, for standard self-attention. Moore’s law in the AI domain is no longer as generous as it once was.
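To see why long contexts are expensive, consider that standard self-attention compares every token with every other token. A two-line sketch:

```python
# Cost of one full self-attention pass, counted in token-to-token
# comparisons: it grows with the square of the sequence length.

def attention_pair_count(seq_len: int) -> int:
    """Number of token pairs compared in standard (dense) attention."""
    return seq_len * seq_len

for n in [1_000, 10_000, 100_000]:
    print(f"{n:>7} tokens -> {attention_pair_count(n):,} comparisons")
# Growing the window 10x multiplies the attention cost roughly 100x.
```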

This is the point at which the second philosophy comes into play.

China – the masters (?) of optimisation

Sanctions restricting access to the newest NVIDIA chips – which, if I remember correctly, began in particular with the H100 – were meant to slow the development of Chinese AI. In practice, however, they became a catalyst for innovation.

In 2025, the DeepSeek R1 model shocked the community, throwing down the gauntlet to OpenAI at significantly lower training and inference costs. It used the Mixture of Experts (MoE) architecture at scale, in which only some parameters are activated for a given token – in DeepSeek’s case, roughly 37 billion out of about 670 billion parameters per token. The rest remain “asleep”. The effects are:

  1. lower compute usage;
  2. lower costs;
  3. greater energy efficiency.
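The sparse-activation idea behind those three effects can be sketched in a few lines. This is purely illustrative: real MoE layers (DeepSeek’s included) use trained gating networks, load-balancing losses and far more experts, and every name and size below is my own invention.

```python
# A minimal Mixture-of-Experts routing sketch: each token is processed
# by only TOP_K of N_EXPERTS expert networks, chosen by a router.
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, DIM = 8, 2, 16
experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
gate = rng.standard_normal((DIM, N_EXPERTS))  # router ("gating") weights

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token to its TOP_K highest-scoring experts only."""
    scores = token @ gate                # one score per expert
    top = np.argsort(scores)[-TOP_K:]    # pick the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()             # softmax over the chosen k
    # Only TOP_K experts do any work for this token; the remaining
    # N_EXPERTS - TOP_K stay "asleep" -- that is the compute saving.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_layer(rng.standard_normal(DIM))
print(out.shape)  # same dimensionality in and out: (16,)
```

With these toy numbers, each token touches 2 of 8 experts, i.e. a quarter of the expert parameters – which is the same proportionality argument as “37 billion active out of 670 billion”.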

In addition, the Chinese are experimenting with techniques such as Multi-Head Latent Attention (MLA), which compresses the attention cache to reduce memory load and speed up computation.

DeepSeek pushed API prices down to a level that was almost symbolic compared with American models, forcing a global price war. For companies, this means LLMs become more accessible. And if the cost of inference and content generation falls, AI can be deployed across a larger number of operational processes.

China and the efficiency of AI

For China, the constraints became a reason to look for other routes to the goal – and, paradoxically, in the long term the Chinese may become the best at using compute efficiently. In fact, they are already starting to prove it.

China shows that hardware constraints do not have to mean stagnation. Sometimes they force greater creativity. And after all, we Poles have more than once proved that innovation can spring from scarcity 😉 Speaking of Poland and Europe…

Europe – “small language models” and data quality

I try to look at Europe with optimism, but I wouldn’t say we are in pole position in the AI race.

Europe is not competing on parameter counts and that may well be its greatest strength.

France’s Mistral AI has shown that models at 7B or 8x7B can compete with giants in specific applications – if they are trained on high-quality data and tuned appropriately.

The European strategy focuses on:

  • getting the most out of high-quality training datasets;
  • open weights/transparency of parameters;
  • models that can be run locally (on-premise).

For many companies – especially in finance, healthcare or the public sector – this makes it possible to operate without sending data to an American cloud, which, viewed through the lens of regulations such as the GDPR, matters enormously.

EU and Small Language Models

Europe’s concern for values often translates into quality. It would be excellent if we could translate that into AI solutions and the industries of the future. At the same time, we need a great deal of courage and a real push to make it happen.

And this brings us to another feature of the European approach. Europe prioritises control, privacy and specialisation. It is not trying to win a race of sheer scale, but is building tools better suited to an already regulated market and it regulates AI itself. (I’ll write a fair bit about this in the piece on the institutional approach to AI.)

Will that be enough to achieve technological sovereignty? It depends. Not only on model architecture, but also on hardware.

Open source and a reversal of poles

Just a few years ago, the West was the symbol of openness. TensorFlow, PyTorch, countless repositories – the whole world contributed to many of these solutions, but I will risk the claim that all of this built a global ecosystem around American standards. Today we are seeing a reversal of roles.

Leading US companies are increasingly keeping the technical details of their models closed. Meanwhile Chinese firms – Alibaba (Qwen), DeepSeek and others – are releasing AI models under licences such as Apache 2.0 or MIT.

Changes in open source in AI

Changes in how we define what “open source” means may affect the AI industry and the way and extent to which this technology is used.

Qwen 3.5 ranks highly, often outperforming open models from the US (I also refer you to my research). DeepSeek is a very frequent topic of questions during the training sessions I run – it has lodged itself so firmly in the public consciousness. China’s openness strategy is a strategic move, because whoever controls the standard controls the future of interoperability.

Meta is pursuing a different tactic. The Llama series is meant to be an open alternative to Google and OpenAI models (and I should note: Llama’s licence contains restrictions that make it impossible to describe these solutions as fully open). Zuckerberg’s strategy resembles a “scorched earth” approach – lowering barriers to entry in order to undermine competitors’ business models based on expensive APIs.

The result? Innovation in the US is being driven in a distributed model, but at the same time the boundary between “open” and “closed source” is becoming increasingly blurred.

The OSI is working on a new definition of Open Source AI, which may exclude most of the current models described as “open”. The dispute over the definition is, in essence, a dispute over power.

Silicon cannons

Google is developing TPUs, Amazon – Trainium and Inferentia, Microsoft – Maia. The big cloud providers are designing their own accelerators to reduce their dependence on NVIDIA. Yes, this may come as a surprise to some, but a company once known for making graphics cards is setting the pace for the most powerful servers used by OpenAI, Anthropic and many other LLM builders.

NVIDIA remains the hegemon thanks to CUDA – a developer ecosystem that has become the de facto standard. Manufacturing chips is one thing. Building tools, libraries and a community around them creates a far greater level of complexity. I don’t really host AI models on my own computer because, in my view, there are better ways to experiment, but when I recently upgraded my hardware I very deliberately chose a mid-range NVIDIA graphics card – precisely because of CUDA. In fact, I did not even consider any others.

Hardware innovations in AI

Both the United States and China are investing in innovative hardware solutions to gain an edge in AI performance. And Europe?

As I wrote earlier, lack of access to the latest GPUs forced China to find its own path – building clusters made up of different types of chips, such as older NVIDIA models, the Huawei Ascend 910B and others. This naturally drove the development of domestic microchip technologies, but also software – such as Huawei’s CANN – that can manage a heterogeneous environment efficiently.

The West uses homogeneous clusters. China has to stitch together “patchworks” of hardware. Paradoxically, in the longer term this may bring greater systemic resilience – their models may run well regardless of which chips they happen to be running on.
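A toy sketch of that “patchwork cluster” idea: a scheduler that splits work across accelerators of different speeds. The chip names and throughput numbers below are hypothetical, and real heterogeneous-compute stacks (Huawei’s CANN among them) are vastly more complex.

```python
# Divide a training workload across a heterogeneous cluster,
# proportionally to each accelerator's (made-up) relative throughput.

ACCELERATORS = {          # hypothetical relative throughputs
    "nvidia_a100": 1.0,
    "nvidia_v100": 0.5,
    "ascend_910b": 0.8,
}

def split_batches(total_batches: int) -> dict[str, int]:
    """Assign batches proportionally to each chip's throughput."""
    capacity = sum(ACCELERATORS.values())
    shares = {name: int(total_batches * t / capacity)
              for name, t in ACCELERATORS.items()}
    # Hand any rounding remainder to the fastest chip.
    fastest = max(ACCELERATORS, key=ACCELERATORS.get)
    shares[fastest] += total_batches - sum(shares.values())
    return shares

print(split_batches(1000))
```

The point is not the arithmetic but the design pressure: once your hardware is heterogeneous, the software layer has to reason about per-device capability – which is exactly the skill homogeneous clusters never force you to build.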

Human capital and the publication war

Without researchers, engineers and entrepreneurs, even the largest data centres remain useless. So if model architecture and access to chips are the fuel in the AI race, then people (in corporate speak: “talent” ;)) are its flywheel. Today, major changes seem to be taking place in how we harness human potential. The structure of academic careers is shifting, the relationship between universities and business is changing, and so are the “geography” of scientific publications and even the language in which knowledge is produced.

The privatisation of science

In the US we are seeing a phenomenon increasingly described as the “mass privatisation of science”. According to Higher Education Quarterly, in 2004 only 21% of AI PhD graduates went into companies (the IT industry, manufacturing, banking etc.). By 2020 it was already around 70%.

Human capital and the AI race

Human capital is taking on added importance in the global AI race and this is reflected in the tug of war for talent between universities and companies.

Universities, especially American ones, are beginning to act as “talent pipelines” for Big Tech. The most capable doctoral students are snapped up by OpenAI, Google, Anthropic or Meta long before they have the chance to build independent academic teams. The pay and the opportunities are incomparable.

As a result, the private sector, with thousands of powerful chips and access to unique datasets, is taking over the role of leader in fundamental research. Knowledge is increasingly accumulating in closed corporate repositories rather than in open databases such as arXiv.

This leads to a phenomenon that even has a name now: “corporate science closure”. American labs are moving away from publishing detailed papers in favour of high-level technical reports. Overall statistics may suggest a slowdown in innovation, but in reality know-how is shifting from the public domain into a zone protected by intellectual property rights.

The paradox is that it is often Chinese companies that now publish more detailed AI work, for example on MoE architecture or optimising model training. My thesis is that they do this to build global credibility and an ecosystem around their own solutions.

China bets on STEM

While looking for information for this article, I came across a National Science Board report showing that China trains almost twice as many STEM graduates as the US. This disparity may have interesting effects in the long run. Beijing is pursuing a centralised strategy – among other things via the “AI Innovation Action Plan for Colleges and Universities” from (note!) April 2018 – aligning universities’ activity with state and economic plans.

By 2030, China may overtake the US in terms of the number of scientific publications – projected at around 67-68 thousand a year compared with 51-52 thousand in the US – and in the number of patents. But even more interesting is that there has been a “crossing of curves” in terms of quality.

Historically, Chinese papers were sometimes criticised for being incremental. Today, when it comes to key conferences – NeurIPS, ICML, ICLR – the share of Chinese teams is rising noticeably. In computer vision or hardware optimisation they are setting the standards. Their academic system, closely tied to industrial goals, rewards solutions that work in logistics, surveillance systems or autonomous vehicles.

China and its scientific potential in AI

China is seen as the world’s factory. Can it repeat that success when it comes to the quantity and quality of scientific work? They seem to have said: “hold our coffee – we’ll show you”.

The United States, by contrast, studies so-called emergent behaviours* at a scale nobody else can match. In that way, it forms the “top of the pyramid”, where a handful of papers each year changes the paradigm for the entire field. China dominates the “long tail” – thousands of publications adapting those discoveries to medicine, transport or agriculture. Fundamental versus applied – both approaches matter. (By the way, I recommend my text on innovation inspired by the book Disruptive Innovations.)

* These are behaviours or characteristics that become visible only at the level of the whole system, even though they are not directly visible in its individual parts. For example, a single ant does not have Google Maps, but a colony can create paths to food, move along them and optimise them efficiently.

Europe – ethics and a “culture of caution”

Europe remains an incubator of talent. Many authors of breakthrough work were educated at European universities. The problem is that commercialisation of innovation often happens off the continent.

The reason is not only a lack of capital. Increasingly, people talk about a “culture of caution”. Europe’s regulatory environment – with the AI Act as its symbol – emphasises safety and ethics. Europe leads in publications on XAI (explainable AI), governance and AI system safety.

EU and careful approach to AI

However, breakthrough work on LLMs requires access to clusters of thousands of GPUs, which European universities typically do not have. Innovation therefore shifts towards mathematical theory and algorithmic optimisation – where the infrastructure barrier is lower.

In my view – though it would probably be easy to prove – Europe more often prioritises risk minimisation over experimentation. Investors have approached me, and we have grant programmes, but in (presumably one should add “almost”) every case I have to demonstrate in advance what I want to do, what exactly I will buy and when, how much I will earn and what technologies will be created. In AI, where in a year we may have access to completely different technologies, that approach makes no sense. So far I have not used any of these options.

It may come as a surprise: a continent that co-created the foundations of modern AI struggles to build global champions.

Democratising science through AI

There is one more, less obvious thread. AI tools are democratising science.

Advanced chatbots are now used to edit and proofread scientific texts. For researchers from China, Europe or Latin America, for whom English is not a native language, this levels the playing field. Instead of focusing on style, they can focus on substance. In the longer term, this may increase the citation impact of work from outside the Anglosphere and speed up the global diffusion of research quality. (Remember, we watch out for hallucinations and don’t act thoughtlessly, including by copy-pasting ;))

Incidentally, here is an interesting twist: tools created in the US may contribute to a relative reduction of America’s advantage in publishing.

The US, Europe and China – the global race in AI and science

Different perspectives have always been a driver of science – they have allowed us to challenge less-than-ideal assumptions. If language is no longer a major barrier, scientific progress may accelerate even further.

On the other hand, “strategic education blocs” also seem to be emerging. For example, some regions cooperate with China while others restrict cooperation, which leads to a situation where, instead of an integrated scientific community, we begin to see parallel ecosystems.

The era of romantically understood “open science” – at least in areas of strategic importance for the future of the world – seems to be coming to an end. Or at least we are pressing pause.

What does this mean for Poland and Europe?

Poland has a strong scientific and engineering base and a growing awareness of the need for digital transformation. What is more, we are maturing to the point where we can say boldly that we have nothing to be ashamed of when it comes to human capital. Many leading researchers and engineers from Poland stand behind the successes of America’s large language models and its start-ups. If they have managed to break into the ranks of the absolute top specialists, that means that by creating the right conditions here, at home, we have a chance to replicate their success. I know this is a complex topic – and I will write about it in one of the next pieces – but it is not a lack of talent or willingness that is the problem.

When it comes to the business and political environment, at both national and European level we need:

  • investment in skills and retaining talent;
  • support for fundamental research;
  • building local data centres;
  • regulation that does not stifle innovation;
  • better cooperation between academia and business (including sensible rules that govern it).

People do not follow only the best university – nor even this or that company we associate with a given country. Many more factors are at play, and it is worth ensuring, for instance, access to compute and infrastructure. But we also need a culture that accepts risk and, simply, a sense of real impact.

AI strategies for Poland and Europe

Poland and Europe need well thought-through (that is the most important part), large and bold investments to strengthen their position in the AI industry.

Artificial intelligence should be treated as a tool that supports people, not one that replaces them and their judgement. Europe is trying to impose a certain narrative in this regard. How well it is doing that and – forgive me, because this will sound bad – how profitable it will be for us will be shown by the next few years, which will probably cement a certain state of affairs for a long time (as happened with social media). I hope it does not end with the narrative alone, and that Poland and Europe will have a growing influence on AI development – using our “delays” 1) as a reason to look for other routes in artificial intelligence and, more broadly, 2) as a lesson for the next wave of innovations (the space industry? quantum computers? B2M?).

This is all for today. As always, I invite you to subscribe to the AI and Management newsletter – that way you won’t miss any article on my blog.