AI is leaving the cloud and entering the physical world. Embodied AI – artificial intelligence in a robot’s body – is the next technological (r)evolution. It will transform factories, warehouses, hospitals, and eventually our homes as well.

We have grown accustomed to thinking about AI as a “brain in a jar” – something that exists in the cloud, processes information, and generates text or images on a screen. Yet more and more often we hear about robots and other devices that use artificial intelligence. In other words, AI is acquiring the ability to move, see, touch, and perform operations that affect our physical space.

This is exactly what Embodied AI is about. It is the integration of advanced artificial intelligence – most often several different models – with a physical system or device: a robot, an autonomous vehicle, or a drone.

This is not just “AI in a box”, because the key element is a continuous feedback loop with the environment. The robot sees something (perception), decides what to do (reasoning), and does it (action). Its action immediately changes the environment, generating new data for its “senses”. It learns through physical experience.
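
To make the loop concrete, here is a minimal Python sketch (the `robot` and `policy` objects are hypothetical placeholders, not any real API):

```python
import time

def run_control_loop(robot, policy, hz=10):
    """A minimal sense-think-act loop. `robot` (hardware interface) and
    `policy` (decision-making model) are hypothetical placeholders."""
    period = 1.0 / hz
    while True:
        observation = robot.read_sensors()   # perception: cameras, touch, IMU...
        action = policy.decide(observation)  # reasoning: choose what to do
        robot.execute(action)                # action: change the environment...
        time.sleep(period)                   # ...and generate fresh sensory data
```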


Embodied AI is a step toward enabling artificial intelligence to operate in the real world, breaking through the limitations of a purely digital environment.

Some researchers see the acquisition of such capabilities as a key step toward true AGI (I don’t – not entirely – but that’s a separate topic). Large language models (LLMs) have mastered language, but they lack physical grounding, an “intuition” for the laws of physics, and common-sense reasoning about the world. The physical world provides context that cannot be learned from billions of pages of the internet.

The Anatomy of Embodied Intelligence

Physical AI requires several key components that together form a cycle of perception and action. Multiple scientific disciplines come together here, including engineering and cognitive science. I have grouped the components into three categories.

Senses (multimodal perception)

For AI-based robots – whatever form they take – to perceive the world at all, they need more than just a simple camera. If a robot is to truly “understand” its environment, it must combine data from multiple sources (a short fusion sketch follows this list). It can use:

  1. RGB-D cameras – allowing it to see both color and depth;
  2. LiDAR – providing precise distance measurements;
  3. microphones – enabling it to interpret not only speech but also other ambient sounds;
  4. advanced tactile sensors – informing it about pressure, texture, and slippage (for example, so the robot does not crush an egg);
  5. gyroscopes, accelerometers, and other sensors used to determine the position of the robot and of its individual parts.
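
As a rough illustration of what “combining data from multiple sources” can mean in code, here is a hedged sketch – every interface below (`sensors.camera`, `sensors.lidar`, and so on) is invented for the example:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Observation:
    """One time-aligned snapshot of the robot's multimodal senses."""
    rgb: Any      # color image from the RGB-D camera
    depth: Any    # per-pixel distance from the same camera
    lidar: Any    # precise range measurements
    audio: Any    # microphone buffer (speech and ambient sound)
    touch: Any    # pressure / slip readings from tactile sensors
    imu: Any      # gyroscope + accelerometer readings (pose)

def fuse(sensors: Any) -> Observation:
    """Package the raw streams into one observation the 'brain' can reason
    over. The `sensors` object is a hypothetical hardware interface."""
    return Observation(
        rgb=sensors.camera.rgb(),
        depth=sensors.camera.depth(),
        lidar=sensors.lidar.scan(),
        audio=sensors.mic.buffer(),
        touch=sensors.hand.readings(),
        imu=sensors.imu.state(),
    )
```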

For a robot, understanding the world requires integrating different forms of perception – something that advanced sensors make possible.

The brain (AI models)

In recent years, a major breakthrough has occurred here, though in my view there is still a long way to go. Foundation models (including LLMs) give robots the ability to reason at a high level. A new standard is emerging in the form of vision–language–action (VLA) models, which directly connect what a robot sees with a language instruction and a concrete motor action.
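
Conceptually – and this is only a sketch, not the API of any specific VLA model – the idea can be expressed like this:

```python
from typing import Any, Sequence

class VisionLanguageActionModel:
    """A hypothetical wrapper illustrating the VLA idea: a single model that
    maps (what the robot sees, what it was told) to what its motors should do."""

    def __init__(self, backbone: Any):
        self.backbone = backbone  # a pretrained vision-language foundation model

    def act(self, image: Any, instruction: str) -> Sequence[float]:
        # Encode pixels and text into one shared representation...
        tokens = self.backbone.encode(image=image, text=instruction)
        # ...and decode it directly into motor commands (joint targets,
        # gripper open/close), rather than into a textual plan.
        return self.backbone.decode_actions(tokens)

# Usage (hypothetical): model.act(camera_frame, "put the red cup on the tray")
```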

An interesting example from Poland: the startup Pathway has developed a novel AI architecture called Baby Dragon Hatchling (BDH). It is inspired by the biological structure of the brain – instead of standard transformer blocks, it uses a network of neurons and synapses that learn according to the Hebbian rule (“neurons that fire together, wire together”). This may help solve problems such as continual learning in AI agents without catastrophic forgetting of earlier lessons.
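
The Hebbian rule itself is easy to state. A minimal, illustrative update in Python (not BDH's actual implementation) could look like this:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """'Neurons that fire together, wire together': strengthen synapse w[i, j]
    in proportion to the co-activity of presynaptic neuron j and postsynaptic
    neuron i; the decay term keeps weights from growing without bound."""
    return w + lr * np.outer(post, pre) - decay * w

# Example: 3 presynaptic and 2 postsynaptic neurons
w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])    # presynaptic activity
post = np.array([1.0, 0.0])        # postsynaptic activity
w = hebbian_update(w, pre, post)   # only co-active pairs are strengthened
```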

The body (mechanics and electronics)

The “brain” needs “muscles”. For years, robots were rigid and clumsy. They relied on stiff, metal joints and electric motors that worked perfectly for repetitive car-body welding, but terribly for handing someone a glass of water.


Today, there is intense work underway on improving the fluidity and finesse of movement, and engineers are increasingly turning to biomimicry – imitating solutions developed by evolution. Some even speak of a new field: morphological computation. The idea is that the physical design of a robot’s body should itself be an element of its “intelligence”. For example, a gripper made of a flexible material will adapt to the shape of an apple without complex processor calculations, whereas a rigid, metal manipulator would have to perform millions of computations to avoid crushing the fruit.

Here again we have a Polish contribution: the Wrocław-based startup Clone Robotics is attempting to recreate human anatomy instead of using traditional electric motors. They are building synthetic muscles (Myofiber) powered by water. The artificial fibers contract 30% faster than human muscles and provide remarkable strength and smoothness of movement. Such a hand can crush concrete, and moments later gently grasp a grape.

The Training Ground

Robots cannot learn everything “live” in the real world. Imagine a humanoid robot learning to walk by trial and error – every fall risks costly damage. There are already plenty of videos showing, for example, an early version of a robot crashing into a mirror at home or spilling scrambled eggs on itself and then slipping on them. While this may be funny, in other contexts it could lead to serious tragedies – for instance, if an autonomous car were learning how to react to pedestrians only on public roads. Here, mistakes are unacceptable. Learning in the physical world is simply too slow, too expensive, and too dangerous. That is why engineers have moved the learning process into the virtual world.

A large portion of training takes place in physics simulators such as NVIDIA Isaac Sim, built on the Omniverse platform. For this purpose, so-called digital twins are created. One example is BMW, which uses a digital twin of its entire factory to safely train robots and optimize their operation before any machine is deployed on the production floor. In simulation, a robot can “experience” millions of scenarios within a few hours and learn from mistakes that cost almost nothing.


Simulators allow robots to develop without the risk of costly errors and the dangers of the real world.

However, simulation is never perfect. The greatest challenge is transferring skills acquired in this sterile, digital environment to dirty, chaotic reality. Engineers call this the “sim-to-real” gap. To bridge it, techniques such as domain randomization are used: in simulation, lighting, floor textures, and friction are deliberately varied so that the robot does not memorize the appearance of one specific environment but instead “understands” general principles of physics and adaptation.
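
A hedged sketch of the idea, with an invented simulator interface: before each training episode, the world's parameters are re-rolled so the policy never trains on the same environment twice.

```python
import random

def randomize(sim):
    """Domain randomization: re-roll the simulated world before each episode,
    so the policy learns general physics rather than one specific scene.
    The `sim` interface is invented for this sketch."""
    sim.set_light_intensity(random.uniform(0.3, 1.5))
    sim.set_floor_texture(random.choice(["wood", "tile", "carpet", "concrete"]))
    sim.set_friction(random.uniform(0.4, 1.2))
    sim.set_sensor_noise(random.gauss(0.0, 0.02))

def train(sim, policy, episodes=1_000_000):
    for _ in range(episodes):              # millions of trials cost almost nothing
        randomize(sim)                     # a slightly different world every time
        obs, done = sim.reset(), False
        while not done:
            obs, reward, done = sim.step(policy.decide(obs))
            policy.learn(obs, reward)      # placeholder for the actual RL update
```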

A Robot on the Payroll

Forecasts for 2030 point to a market worth over USD 23 billion, growing at nearly 40% per year. Where is Embodied AI already operating, or where will it operate in the near future?

Manufacturing and logistics

Warehouses are already an everyday environment for robots. But they are now being joined by machines that “understand” what they are picking up.

It is worth noting that the Polish company Nomagic is one of the leaders in warehouse automation. Their robot, named Richard, works for, among others, the fashion giant Zalando. The machine can independently pick, scan, and sort thousands of products per day (on average 10,000 items daily!). Interestingly, Zalando has invested in the Polish startup, seeing it as the future of its logistics.

There is also a lot happening in car factories. The company Figure has signed an agreement with BMW and is testing its humanoid robots (the Figure 01 model) at the factory in Spartanburg. For now, these are feasibility tests, but the vision is clear: robots are meant to perform tasks that are ergonomically difficult or dangerous for humans.


Construction

I have already written about the construction sector on social media, pointing out that it has long suffered from stagnant productivity and a chronic shortage of labor. Robots – not only humanoid ones – may be the remedy.

In the coming years, they will act as assistants: preparing tools, cleaning construction sites, handling painting, or unloading materials. In the longer term (over 10 years?), once their mobility in difficult terrain improves, they may be able to install drywall panels (prototypes already exist in Japan), pull cables, and even pour concrete.

The most sensible use seems to be in large infrastructure projects, where tasks are repetitive but the environment is too dangerous or exhausting for humans.

Healthcare

Medical robotics is not just the famous da Vinci surgical system, which gives doctors superhuman precision. Embodied AI is entering the area of direct care.

Robots are being built and improved to assist in the physical rehabilitation of patients, monitor vital signs, and support older people in everyday activities, relieving medical staff. It is also worth mentioning diagnostics – AI systems are being “embodied” in devices such as autonomous ophthalmic cameras that can examine eyesight and detect diseases in real time.


Embodied AI may take a form that looks nothing like a humanoid, while still helping in daily life and in the process of recovery.

Other sectors

The transformation also extends to agriculture, where autonomous tractors, crop-monitoring drones, and fruit-harvesting robots can help increase efficiency and reduce food waste. Here again, the main beneficiaries of robotization will be large farms.

From field to table… We also see robots in restaurants, delivering ordered food to tables. I believe we will increasingly see them in stores as well: greeting customers and offering assistance, packing products, and autonomously managing inventory in the back. (At the same time, I would argue that human service will become a marker of a certain kind of luxury.)

Finally, robots such as the famous Spot from Boston Dynamics are deployed where humans should not go: inspecting hazardous installations, detecting mines, or responding to natural disasters.

From Roomba to Rosie

The place most often associated with humanoid robots is our home. Even in The Jetsons, the housekeeper Rosie was – one could say – part of the family. The consumer robotics market has enormous potential, but also the highest barriers to entry.

Why is it so difficult? A factory is predictable. A home, by contrast, is full of soft objects, clutter, pets, and children. A robot that can reliably clean a child’s messy room (and not step on LEGO bricks!) would be the Holy Grail of this technology.


The future of consumer robotics lies in overcoming the complexity of everyday life at home. I’ve just been reminded that I need to tidy up rubber bones and balls so I don’t trip over them at night…

Another possible future is robots for elder care. In aging societies, they will provide companionship, remind people to take their medication, and help monitor health conditions. I believe that as a result, they will help seniors live independently for longer.

What Stands in the Way of Embodied AI?

The vision is impressive (and for some probably frightening), but the road to it is full of obstacles. Briefly, a few examples:

  • Advanced humanoid robots are still very expensive. For them to become widespread, their price must fall to the level of a car (around USD 20–30 thousand). For companies, an emerging alternative is the RaaS (Robotics as a Service) model – “renting” a robot’s labor by the hour.
  • Who is responsible for an accident caused by an autonomous robot? The hardware manufacturer, the AI developer, or the owner? This “legal vacuum” slows down deployment.
  • What will human development look like when part of the world has access to robots and part does not? This threatens to deepen the divide between people.
  • Widespread automation raises justified concerns about jobs. However, history shows that technology more often changes the nature of work than eliminates it.

Technological, legal, and social challenges stand in the way of the widespread adoption of advanced robotics. Interestingly, the pace of development may mean that older, less capable robots will themselves be condemned to… unemployment.

Embodied AI – a Few Words at the End

It is hard to treat Embodied AI as distant science fiction, although a healthy dose of common sense is still needed. Many demonstrations tend to be “enhanced” – robots that appear fully autonomous in viral videos are, in reality, sometimes remotely controlled by an operator behind the scenes (so-called teleoperation). That said, this does not change the fact that the revolution is accelerating.

For business leaders in certain sectors, this transformation means the need to act. This is the moment to move beyond theorizing and start investing in digital infrastructure and pilot programs, identifying processes where the return on investment will be highest. Waiting for the “perfect robot” may mean losing ground to the competition.


The coming robotics revolution requires openness to change and a readiness to adapt. Let’s be honest: education will have to adjust as well.

And what about all of us? How do we find our place in this new reality?

The key is adaptation. Instead of fearing that machines will replace us, we should focus on skills that AI cannot easily replicate: creativity, critical thinking, emotional intelligence, and ethical judgment.

In the world of the future, we will no longer be just machine operators. We must prepare for a new role – managing their work, overseeing it, and solving problems that exceed the capabilities of algorithms.

If you want to stay up to date on how AI is changing the physical world, I invite you to sign up for the AI and Management newsletter – this way, you won’t miss any article on my blog.