Stories of human-like dolls yearning to become real people turn up everywhere. Pinocchio wants to be a real boy. The robot child in Spielberg’s A.I. wants to be loved like a human son. The story keeps getting retold because people assume the trajectory is obvious. Build something that looks human, keep improving it, and one day the copy becomes indistinguishable from the original.
What’s happening on the ground is stranger than that. At CES 2026, Boston Dynamics’ Atlas demonstrated wrists that bent backward and a torso that spun a full 180 degrees. Elsewhere, humanoid robots are beginning to diverge in even more striking ways. Some can swap their own batteries by reaching both arms behind their backs. Others walk on reverse-jointed legs. The human silhouette is still there, but the movements inside it have gone somewhere else entirely.
There’s an obvious objection here. Hasn’t copying nature worked before? Sometimes. Gecko toe pads gave engineers the idea for dry adhesives. Sharkskin texture showed up in competitive swimsuits. But in both cases, engineers borrowed the physics underneath, not the shape. The ones who tried to copy natural forms wholesale usually hit a wall.
For centuries, people tried to build ornithopters that flapped like birds, but none became a practical path to human flight. The Wright brothers got off the ground not because they simply imitated, but because they moved beyond flapping and focused on the principles of lift and control.
If evolution has spent millions of years refining a design, why don’t engineers just copy it? We took that question to the Hubo Lab at KAIST. The lab built HUBO, the robot that won the 2015 DARPA Robotics Challenge, and today it is led by Prof. Park Hae-won. His team’s recent work gives a sense of the range. Humanoid legs that sprint at 12.6 kilometers per hour. A quadruped robot that walks straight up vertical walls. A one-legged hopper that launches into mid-air somersaults and lands on the same leg.
From the center of the back row, clockwise: Hae-Won Park, Dongyun Kang, Hajun Kim, JongHun Choe, and Min-Su Kim.
Image: KAIST
Mimicking nature is not always the right answer.
At 12.6 kilometers per hour, a person has to break into a run. A robot built by Prof. Park Hae-won’s team at KAIST can sprint at that speed on two legs. It glides through motions that look like Michael Jackson’s moonwalk and picks its way over rough terrain with a duck-like waddle.
One place to start is biology. Roboticists have been borrowing nature’s tricks for decades. Prof. Park’s robots do look like they come from that tradition. But he works the other way around. Instead of studying an animal to build one, he picks a problem and builds a machine to solve it.
“If you’re developing technology for high-speed movement, wheels can be an efficient choice,” Prof. Park said. “There’s no need to mimic the motion of a cheetah.”
A car on wheels outruns a cheetah. Evolution never set out to build the fastest runner. It built the one most likely to survive.
“Studying natural organisms gives us a sense of the level of performance that can be reached when something is well designed,” Prof. Park said. “It serves as a useful reference for setting direction during research and development.” He added, “It’s important to view nature as one reference point. Rather than replicating it directly, it’s more appropriate to use it as a source of ideas.”
Humanoids face the same question. A human body runs on muscles, tendons, and chemical energy. A robot runs on metal frames, motors, and electricity. To copy human movement faithfully you’d need artificial muscles, but motors still tend to outperform commercially available artificial muscles in many practical metrics. So why handicap a robot by forcing it to move like a body it doesn’t have?
MARVEL, a quadruped robot from Prof. Park’s lab, was designed for grimmer work. Researchers wanted a robot that could move freely across the steel structures of shipyards, bridges, and large storage tanks. Places where maintenance crews risk fatal falls.
Gecko feet or insect claws might sound like the right model for a wall-climbing robot. But real industrial steel is rusted, layered in old paint, and caked with grime. Gecko-style adhesion would likely struggle to hold heavy equipment on surfaces like that.
Instead, researchers built MARVEL with electro-permanent magnets in its feet. Conventional electromagnets drain power continuously to stay on. Electro-permanent magnets work differently. A brief electrical pulse rearranges the internal alignment of the magnet’s poles, switching the grip on or off. MARVEL’s feet lock and release in about five milliseconds.
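The switching behavior described above can be sketched as a toy model: holding on costs no power, and energy is spent only during the brief toggling pulse. The pulse energy and the class itself are illustrative assumptions, not MARVEL’s actual specifications.

```python
class EPMFoot:
    """Toy model of an electro-permanent magnet (EPM) foot."""

    PULSE_SECONDS = 0.005  # ~5 ms switching time cited for MARVEL

    def __init__(self, pulse_energy_j=1.0):  # pulse energy is an assumed figure
        self.pulse_energy_j = pulse_energy_j
        self.engaged = False        # magnetized (gripping) or not
        self.energy_used_j = 0.0    # total electrical energy spent

    def pulse(self):
        """A brief current pulse flips the magnet's internal alignment,
        toggling the grip on or off."""
        self.engaged = not self.engaged
        self.energy_used_j += self.pulse_energy_j

    def hold(self, seconds):
        """Holding costs nothing: the permanent magnet does the work.
        A conventional always-on electromagnet would draw power here."""
        return 0.0


foot = EPMFoot()
foot.pulse()        # engage: one ~5 ms pulse of energy
foot.hold(60.0)     # cling to the wall for a minute, for free
foot.pulse()        # release
print(foot.engaged, foot.energy_used_j)  # False 2.0
```

The point of the design is visible in the energy ledger: two pulses total, no matter how long the foot stays locked to the wall.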
Once the magnets engage, the wall itself becomes the robot’s ground. Three legs stay anchored while the fourth steps forward. MARVEL travels at 0.7 meters per second on vertical walls and at 0.5 meters per second while hanging upside down from a ceiling. Its adhesive force reaches nearly 54 kilograms, which is enough to carry not just its own weight but also heavy tools.
“If you approach a shipyard robot from a biomimetic perspective, you might conclude that it should resemble a human worker and handle tools the same way,” Prof. Park said. “Ultimately, what matters is designing a system that fits the working environment and the task at hand.”
AI alone cannot build a perfect robot.
Designing the body is only half the problem. AI and reinforcement learning have changed how robots learn to move, but what works in simulation still has to hold up on real hardware.
Prof. Park’s team trains its robots through reinforcement learning. The AI controls the robot’s body and figures out how to walk by trial and error, falling and getting back up the way a toddler does. Doing that thousands of times on real hardware would take forever. So researchers train in simulation instead.
Inside the simulation, Prof. Park’s team runs roughly 400 copies of the same robot at once. Each copy falls and recovers under different conditions, and what all of them learn feeds into a single AI network in real time. Time itself can be compressed. What would take about a year of physical practice fits into roughly four hours on a high-performance computer. Prof. Park said half a day of reinforcement learning is enough to get a robot walking.
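The time compression described above checks out as back-of-envelope arithmetic. With roughly 400 robots practicing at once, a year of experience shrinks to about a day of wall-clock simulation; running the physics faster than real time closes the rest of the gap. The speedup factor below is an assumption for illustration, not a figure from the lab.

```python
# Back-of-envelope check on the compression: 400 parallel copies,
# plus a faster-than-real-time simulator (factor assumed).
HOURS_PER_YEAR = 365 * 24        # ≈ 8760 hours of physical practice
NUM_COPIES = 400                 # simultaneous simulated robots

wall_clock = HOURS_PER_YEAR / NUM_COPIES
print(round(wall_clock, 1))      # 21.9 hours if the sim ran at real time

SIM_SPEEDUP = 5.0                # assumed faster-than-real-time factor
print(round(wall_clock / SIM_SPEEDUP, 1))  # 4.4 hours, near the quoted four
```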
The catch is that a robot trained in simulation doesn’t always survive contact with reality. A robot that tumbles like a gymnast on screen can lose its balance and topple the moment it’s placed on a real floor. Roboticists call this the sim-to-real gap. Simulations can’t capture every wrinkle of real-world physics, and the differences are enough to throw off an AI that learned in a simpler world. Closing that gap is where the KAIST team’s hardware expertise comes in.
One approach the researchers took was to make the real robot behave more like its simulated twin. A big reason AI struggles to control a physical robot is friction in the joints. Conventional robots use off-the-shelf reducers with high gear ratios to amplify motor output. That gives the robot powerful force. At the same time, internal friction makes everything stiff, like pedaling a bicycle stuck in high gear.
“In a gear system with a high reduction ratio, it’s very hard to force it to turn from the outside,” Prof. Park said. “If you attach a linkage and strike it with a hammer, the resistance is so intense that the gear teeth could shatter.”
Most simulations don’t account well for that friction. An AI that learned to walk in a near-frictionless virtual world loses its balance the moment it hits the stiff resistance of a real joint. So Prof. Park’s team built its own actuator that cut the gear ratio to roughly one-tenth of conventional levels while boosting the motor’s own output. It’s a quasi-direct drive design, a concept first proposed at MIT. Less friction in the hardware meant the real robot moved more like the simulated one. After the adjustment, the AI’s training actually carried over.
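One way to see why a lower gear ratio helps: the motor rotor’s inertia, as felt at the joint output, scales with the square of the gear ratio, which is a large part of what makes a high-ratio joint so hard to back-drive. The rotor inertia value below is a typical small-brushless-motor figure assumed for illustration, not the lab’s actuator.

```python
# Reflected inertia at the joint output scales with gear ratio squared.
ROTOR_INERTIA = 1e-4  # kg*m^2, assumed typical small brushless rotor

def reflected_inertia(rotor_inertia, gear_ratio):
    """Inertia the leg 'feels' at the joint output side of the gearbox."""
    return rotor_inertia * gear_ratio ** 2

high = reflected_inertia(ROTOR_INERTIA, 100)  # conventional high-ratio gearbox
low = reflected_inertia(ROTOR_INERTIA, 10)    # quasi-direct drive, ~1/10 ratio
print(high / low)  # 100.0 — cutting the ratio 10x cuts reflected inertia 100x
```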
The KAIST team also worked the problem from the other direction. Instead of making the hardware match the simulation, they made the simulation match the hardware. Because Prof. Park’s team designed and built its own motors, they had detailed data on how those motors actually behave.
That data matters. Most simulations assume torque stays the same no matter how fast the motor spins. Real motors don’t work that way. Spin faster, available torque drops. Slow down, available torque climbs. Training an AI on the simplified version will drive it to push the hardware beyond its limits. Prof. Park’s team fed their actual torque-limit curves into the training, so the AI learned where the motor’s ceiling was and stayed under it.
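The idea of baking the motor’s ceiling into training can be sketched with a simple speed-dependent torque limit. The linear model and the constants below are assumptions for illustration, not the team’s measured curves.

```python
# Available torque falls as joint speed rises (simple linear DC-motor model).
STALL_TORQUE = 30.0    # N*m available at zero speed (assumed)
NO_LOAD_SPEED = 40.0   # rad/s at which available torque reaches zero (assumed)

def torque_limit(speed):
    """Torque the motor can actually deliver at a given joint speed."""
    return max(0.0, STALL_TORQUE * (1.0 - abs(speed) / NO_LOAD_SPEED))

def clamp_command(requested_torque, speed):
    """What the simulator lets the policy apply, so it learns the ceiling
    instead of being trained on torque the real motor can't produce."""
    limit = torque_limit(speed)
    return max(-limit, min(limit, requested_torque))

print(clamp_command(25.0, 0.0))   # 25.0 — well under the stall limit
print(clamp_command(25.0, 30.0))  # 7.5  — at high speed the ceiling drops
```

An AI trained against the flat, simplified limit would demand that 25 N·m at full speed and fail on the real robot; trained against the curve, it learns to stay under it.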
Where all of this comes together is KAIST’s hopping robot. The whole machine is one leg. No arms, no second foot to catch itself. That kind of balance problem is brutal to solve. By then, Prof. Park had already gotten quadruped walking to work. Instead of moving to two legs next, he went straight to one, reasoning that if the algorithm could handle the hardest case first, two legs wouldn’t be a problem.
Researchers loaded everything about the real robot into the simulation. Its shifting center of gravity, its inertia, and the physical limits of its actuators. From there they ran nearly the same reinforcement learning algorithm they’d used for the quadruped. The AI figured out how to balance on one leg. It started jumping. Before long it was doing mid-air somersaults, landing cleanly each time.
“Building the hopping robot confirmed that our reinforcement learning algorithm and hardware design can be applied under a wide range of conditions,” Prof. Park said. “It gave us an opportunity to explore how our motor technology and reinforcement learning techniques might extend to the development of robots in many different forms.”
Prof. Park doesn’t buy the idea that software can solve everything. He’s watched junior researchers spend days debugging code when the real problem was a loose screw or a broken solder joint. When a robot won’t walk, people reach for the algorithm first. They tweak the parameters, rerun the simulations, rewrite the control logic. Meanwhile the actual fault is sitting right there in the hardware. No amount of code will tighten a screw. Hardware knowledge isn’t going away just because AI got good.
“No matter how sophisticated the control technology, there are limits to what can be achieved if the hardware cannot keep up,” Prof. Park said. “In robot development, control and hardware are both critical. Neither can be considered in isolation.”
Can humanoid robots become part of our everyday lives?
The money pouring into humanoid robots right now is staggering. But plenty of technologies have looked just as promising and gone nowhere. Honda spent over two decades on ASIMO before quietly retiring it. A robot that walks across a stage at a trade show is not the same thing as a robot that survives a shift on a factory floor.
Prof. Park’s humanoid is being built for the factory floor. The target payload is 25 kilograms or more. Most humanoids on the market top out well below that. He chose that number because of where South Korea is right now. The country runs one of the world’s largest manufacturing sectors, but the workforce is graying fast. Young people aren’t lining up for welding jobs or assembly-line shifts. The slack is being picked up by older skilled workers and foreign laborers, and there aren’t enough of either. A robot that can only carry light objects is useless in that environment. The quasi-direct drive actuators and custom motors his researchers have been building exist for exactly this kind of work.
The factory floor isn’t the only possible market, though. Prof. Park brought up drones. For decades only the military and a few infrastructure inspectors bothered with them. Then YouTube creators started wanting aerial shots and went looking for something that could fly a camera. Drone companies shipped a cheap quadcopter with a decent camera mount. Within a few years a consumer drone industry had grown up around a need that barely existed before. Prof. Park thinks humanoids could go the same way. The use that actually drives adoption might be one nobody in the industry has imagined yet.
At the close of the interview, Prof. Park said, “I believe robots should complement people, not compete with them. My hope is that robots will ultimately be used to enrich people’s lives and free them to pursue more fulfilling work.”
This story was produced in partnership with our colleagues at Popular Science Korea.
The post Do humanoids dream of becoming human? appeared first on Popular Science.
