Feed aggregator



The Modified Agile for Hardware Development (MAHD) Framework is the ultimate solution for hardware teams seeking the benefits of Agile without the pitfalls of applying software-centric methods. Traditional development approaches, like waterfall, often result in delayed timelines, high risks, and misaligned priorities. Meanwhile, software-based Agile frameworks fail to account for hardware's complexity. MAHD resolves these challenges with a tailored process that blends Agile principles with hardware-specific strategies.

Central to MAHD is its On-ramp process, a five-step method designed to kickstart projects with clarity and direction. Teams define User Stories to capture customer needs, outline Product Attributes to guide development, and use the Focus Matrix to link solutions to outcomes. Iterative IPAC cycles, a hallmark of the MAHD Framework, ensure risks are addressed early and progress is continuously tracked. These cycles emphasize integration, prototyping, alignment, and customer validation, providing structure without sacrificing flexibility.

MAHD has been successfully implemented across diverse industries, from medical devices to industrial automation, delivering products up to 50% faster while reducing risk. For hardware teams ready to adopt Agile methods that work for their unique challenges, this ebook provides the roadmap to success.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA

Enjoy today’s videos!

NASA’s Mars Chopper concept, shown in a design software rendering, is a more capable proposed follow-on to the agency’s Ingenuity Mars Helicopter, which arrived at the Red Planet in the belly of the Perseverance rover in February 2021. Chopper would be about the size of an SUV, with six rotors, each with six blades. It could be used to carry science payloads as large as 11 pounds (5 kilograms) distances of up to 1.9 miles (3 kilometers) each Martian day (or sol). Scientists could use Chopper to study large swaths of terrain in detail, quickly – including areas where rovers cannot safely travel.

We wrote an article about an earlier concept version of this thing a few years back if you’d like more detail about it.

[ NASA ]

Sanctuary AI announces its latest breakthrough with hydraulic actuation and precise in-hand manipulation, opening up a wide range of industrial and high-value work tasks. Hydraulics have significantly more power density than electric actuators in terms of force and velocity. Sanctuary has invented miniaturized valves that are 50x faster and 6x cheaper than off-the-shelf hydraulic valves. This novel approach to actuation results in extremely low power consumption, unmatched cycle life and controllability that can fit within the size constraints of a human-sized hand and forearm.

[ Sanctuary AI ]

Clone’s Torso 2 is the most advanced android ever created with an actuated lumbar spine and all the corresponding abdominal muscles. Torso 2 dons a white transparent skin that encloses 910 muscle fibers animating its 164 degrees of freedom and includes 182 sensors for feedback control. These Torsos use pneumatic actuation with off-the-shelf valves that are noisy from the air exhaust. Our biped brings back our hydraulic design with custom liquid valves for a silent android. Legs are coming very soon!

[ Clone Robotics ]

Suzumori Endo Lab, Science Tokyo has developed a superman suit driven by hydraulic artificial muscles.

[ Suzumori Endo Lab ]

We generate physically correct video sequences to train a visual parkour policy for a quadruped robot that has a single RGB camera and no depth sensors. The robot generalizes to diverse, real-world scenes despite having never seen real-world data.

[ LucidSim ]

Seoul National University researchers proposed a gripper capable of moving multiple objects together to enhance the efficiency of pick-and-place processes, inspired by humans’ multi-object grasping strategy. The gripper can not only transfer multiple objects simultaneously but also place them at desired locations, making it applicable in unstructured environments.

[ Science Robotics ]

We present a bio-inspired quadruped locomotion framework that exhibits exemplary adaptability, capable of zero-shot deployment in complex environments and stability recovery on unstable terrain without the use of extra-perceptive sensors. Through its development we also shed light on the intricacies of animal locomotion strategies, in turn supporting the notion that findings within biomechanics and robotics research can mutually drive progress in both fields.

[ Paper authors from University of Leeds and University College London ]

Thanks, Chengxu!

Happy 60th birthday to MIT CSAIL!

[ MIT Computer Science and Artificial Intelligence Laboratory ]

Yup, humanoid progress can move quickly when you put your mind to it.

[ MagicLab ]

The Sung Robotics Lab at UPenn is interested in advancing the state of the art in computational methods for robot design and deployment, with a particular focus on soft and compliant robots. By combining methods in computational geometry with practical engineering design, we develop theory and systems for making robot design and fabrication intuitive and accessible to the non-engineer.

[ Sung Robotics Lab ]

From now on I will open doors like the robot in this video.

[ Humanoids 2024 ]

Travel along a steep slope up to the rim of Mars’ Jezero Crater in this panoramic image captured by NASA’s Perseverance just days before the rover reached the top. The scene shows just how steep some of the slopes leading to the crater rim can be.

[ NASA ]

Our time is limited when it comes to flying drones, but we haven’t been surpassed by AI yet.

[ Team BlackSheep ]

Daniele Pucci from IIT discusses iCub and ergoCub as part of the industrial panel at Humanoids 2024.

[ ergoCub ]



The ability to detect a nearby presence without seeing or touching it may sound fantastical—but it’s a real ability that some creatures have. A family of African fish known as mormyrids is weakly electric, with special organs that can locate nearby prey, whether it’s in murky water or even hiding in the mud. Now scientists have created an artificial sensor system inspired by nature’s original design. The development could find use one day in robotics and smart prosthetics to locate items without relying on machine vision.

“We developed a new strategy for 3D motion positioning by electronic skin, bio-inspired by ‘electric fish,’” says Xinge Yu, an associate professor in the Department of Biomedical Engineering at the City University of Hong Kong. The team described their sensor, which relies on capacitance to detect an object regardless of its conductivity, in a paper published on 14 November in Nature.

One layer of the sensor acts as a transmitter, generating an electrical field that extends beyond the surface of the device. Another layer acts as a receiver, able to detect both the direction and the distance to an object. This allows the sensor system to locate the object in three-dimensional space.

The e-skin sensor comprises several layers, including a receiver and a transmitter. Jingkun Zhou, Jian Li et al.

The sensor electrode layers are made from a biogel printed on both sides of a dielectric substrate of polydimethylsiloxane (PDMS), a silicon-based polymer commonly used in biomedical applications. The biogel layers owe their ability to transmit and receive electrical signals to a pattern of microchannels on their surface. The end result is a sensor that is thin, flexible, soft, stretchable, and transparent. These features make it suitable for a wide range of applications where an object-sensing system needs to conform to an irregular surface, like the human body.

The capacitive field around the sensor is disrupted when an object comes within range, and the receiver detects that disruption. The magnitude of the change in signal indicates the distance to the target. By using multiple sensors in an array, the system can determine the position of the target in three dimensions. The system created in this study can detect objects up to 10 centimeters away in air; underwater, the range increases to as far as 1 meter.
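
The paper’s positioning step isn’t spelled out here, but the general idea of turning several per-sensor readings into a 3D location can be illustrated with a least-squares multilateration sketch. Everything below is a hedged assumption for illustration only: the 2-by-2 sensor layout, the toy signal-to-distance mapping, and all function names are invented, not taken from the Nature paper.

```python
import numpy as np

# Hypothetical 2x2 array of e-skin sensors on a flat patch (coordinates in meters).
# The layout and the signal-to-distance model below are illustrative assumptions.
SENSORS = np.array([
    [0.00, 0.00],
    [0.05, 0.00],
    [0.00, 0.05],
    [0.05, 0.05],
])

def signal_to_distance(delta_c, k=1e-4):
    """Toy mapping from capacitance change to distance: a bigger change means a
    closer object. The inverse-square form and constant k stand in for a real
    device calibration."""
    return np.sqrt(k / np.maximum(delta_c, 1e-12))

def locate(distances):
    """Estimate the 3D position of a target above a planar sensor array.

    Step 1: linearized multilateration in the sensor plane (x, y) by
    differencing the range equations. Step 2: recover the height z from one
    sensor's full range equation.
    """
    p0, d0 = SENSORS[0], distances[0]
    A = 2.0 * (SENSORS[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(SENSORS[1:]**2, axis=1) - np.sum(p0**2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)            # in-plane position
    z = np.sqrt(max(d0**2 - np.sum((xy - p0)**2), 0.0))   # height above the skin
    return np.array([xy[0], xy[1], z])

# Quick check: simulate capacitance changes for a target 4 cm above the patch,
# convert them back to distances, and locate the target.
target = np.array([0.02, 0.03, 0.04])
true_d = np.linalg.norm(np.c_[SENSORS, np.zeros(4)] - target, axis=1)
delta_c = 1e-4 / true_d**2          # synthetic readings consistent with the toy model
print(locate(signal_to_distance(delta_c)))   # ~ [0.02, 0.03, 0.04]
```

The design point the sketch illustrates is that a flat patch of sensors pins down the in-plane coordinates directly, while the height follows from any single range estimate.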

To be functional, the sensors also require a separate controller connected via silver or copper wires. The controller serves several functions: it generates the driving signal that activates the transmitting layers, and it uses 16-bit analog-to-digital converters to collect the signals from the receiving layers. A microcontroller unit attached to the sensor array then processes this data, computes the position of the target object, and sends that information via a Bluetooth Low Energy transmitter to a smartphone or other device. Computing the position locally, rather than sending raw data to the end device, reduces the energy required.
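
The division of labor described above (drive, digitize, solve on the microcontroller, then transmit only the result) can be sketched as a simple acquisition loop. This is a hedged illustration: the channel count, update rate, payload format, and the stubbed hardware calls are all assumptions, not details from the paper.

```python
import random
import struct
import time

ADC_BITS = 16
FULL_SCALE = (1 << ADC_BITS) - 1  # 16-bit ADC codes, as in the article's controller

def read_receiver_channels(n: int = 4) -> list[int]:
    """Stand-in for sampling the ADCs wired to the receiving layers.
    Real register reads are replaced with random codes for this sketch."""
    return [random.randint(0, FULL_SCALE) for _ in range(n)]

def compute_position(samples: list[int]) -> tuple[float, float, float]:
    """Placeholder for the on-board position solver (see the multilateration
    sketch above); here it just scales three raw codes into the range 0..1."""
    x, y, z = (s / FULL_SCALE for s in samples[:3])
    return (x, y, z)

def pack_ble_payload(pos: tuple[float, float, float]) -> bytes:
    """Pack x, y, z as little-endian floats: a compact notification payload,
    far smaller than streaming the raw samples would be."""
    return struct.pack("<3f", *pos)

for _ in range(5):  # acquisition loop: sample, solve locally, send only the result
    payload = pack_ble_payload(compute_position(read_receiver_channels()))
    # ble.notify(position_characteristic, payload)  # hypothetical BLE call
    print(payload.hex())
    time.sleep(0.02)  # ~50 Hz update rate (assumed, not from the paper)
```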

Power is provided by an integrated lithium-ion battery that is recharged wirelessly via a coil of copper wire. The system is designed to consume minimal amounts of electrical power. The controller is less flexible and transparent than the sensors, but by being encapsulated in PDMS, it is both waterproof and biocompatible.

The system works best when detecting objects about 8 millimeters in diameter. Objects smaller than 4 mm might not be detected accurately, and the response time for sensing objects larger than 8 mm can increase significantly. This could currently limit practical uses for the system to things like tracking finger movements for human-machine interfaces. Future development would be needed to detect larger targets.

The system can detect objects behind a cloth or paper barrier, but other environmental factors can degrade performance. Changes in air humidity and electromagnetic interference from people or other devices within 40 cm of the sensor can degrade accuracy.

The researchers hope that this sensor could one day open up a new range of wearable sensors, including devices for human-machine interfaces and thin and flexible e-skin.



When Sony’s robot dog, Aibo, was first launched in 1999, it was hailed as revolutionary and the first of its kind, promising to usher in a new industry of intelligent mobile machines for the home. But its success was far from certain. Legged robots were still in their infancy, and the idea of making an interactive walking robot for the consumer market was extraordinarily ambitious. Beyond the technical challenges, Sony also had to solve a problem that entertainment robots still struggle with: how to make Aibo compelling and engaging rather than simply novel.

Sony’s team made that happen. And since Aibo’s debut, the company has sold more than 170,000 of the cute little quadrupeds—a huge number considering their price of several thousand dollars each. From the start, Aibo could express a range of simulated emotions and learn through its interactions with users. Aibo was an impressive robot 25 years ago, and it’s still impressive today.

Far from Sony headquarters in Tokyo, the town of Kōta, in Aichi Prefecture, is home to the Sony factory that has manufactured and repaired Aibos since 2018. Kōta has also become the center of fandom for Aibo, since the Hummingbird Café opened in the Kōta Town Hall in 2021. The first official Aibo café in Japan, it hosts Aibo-themed events, and Aibo owners from across the country gather there to let their Aibos loose in a play area and to exchange Aibo name cards.

One patron of the Hummingbird Café is veteran Sony engineer Hideki Noma. In 1999, before Aibo was Aibo, Noma went to see his boss, Tadashi Otsuki. Otsuki had recently returned to Sony after a stint at the Japanese entertainment company Namco, and had been put in charge of a secretive new project to create an entertainment robot. But progress had stalled. There was a prototype robotic pet running around the lab, but Otsuki took a dim view of its hyperactive behavior and decided it wasn’t a product that anyone would want to buy. He envisioned something more lifelike. During their meeting, he gave Noma a surprising piece of advice: Go to Ryōan-ji, a famed Buddhist temple in Kyoto. Otsuki was telling Noma that to develop the right kind of robot for Sony, it needed Zen.

Aibo’s Mission: Make History

When the Aibo project started in 1994, personal entertainment robots seemed like a natural fit for Sony. Sony was a global leader in consumer electronics. And in the 1990s, Japan had more than half of the world’s industrial robots, dominating an industry led by manufacturers like Fanuc and Yaskawa Electric. Robots for the home were also being explored. In 1996, Honda showed off its P2 humanoid robot, a prototype of the groundbreaking ASIMO, which would be unveiled in 2000. Electrolux, based in Sweden, introduced a prototype of its Trilobite robotic vacuum cleaner in 1997, and at iRobot in Boston, Joe Jones was working on what would become the Roomba. It seemed as though the consumer robot was getting closer to reality. Being the first to market was the perfect opportunity for an ambitious global company like Sony.

Aibo was the idea of Sony engineer Toshitada Doi (on left), pictured in 1999 with an Aibo ERS-111. Hideki Noma (on right) holds an Aibo ERS-1000. Left: Raphael Gaillarde/Gamma-Rapho/Getty Images; right: Timothy Hornyak

Sony’s new robot project was the brainchild of engineer Toshitada Doi, co-inventor of the CD. Doi was inspired by the speed and agility of MIT roboticist Rodney Brooks’s Genghis, a six-legged insectile robot that was created to demonstrate basic autonomous walking functions. Doi, however, had a vision for an “entertainment robot with no clear role or job.” It was 1994 when his team of about 10 people began full-scale research and development on such a robot.

Hideki Noma joined Sony in 1995. Even then, he had a lifelong love of robots, including participating in robotics contests and researching humanoids in college. “I was assigned to the Sony robot research team’s entertainment robot department,” says Noma. “It had just been established and had few people. Nobody knew Sony was working on robots, and it was a secret even within the company. I wasn’t even told what I would be doing.”

Noma’s new colleagues in Sony’s robot skunk works had recently gone to Tokyo’s Akihabara electronics district and brought back boxes of circuit boards and servos. Their first creation was a six-legged walker with antenna-like sensors but more compact than Brooks’s Genghis, at roughly 22 centimeters long. It was clunky and nowhere near cute; if anything, it resembled a cockroach. “When they added the camera and other sensors, it was so heavy it couldn’t stand,” says Noma. “They realized it was going to be necessary to make everything at Sony—motors, gears, and all—or it would not work. That’s when I joined the team as the person in charge of mechatronic design.”

Noma, who is now a senior manager in Sony’s new business development division, remembers that Doi’s catchphrase was “make history.” “Just as he had done with the compact disc, he wanted us to create a robot that was not only the first of its kind, but also one that would have a big impact on the world,” Noma recalls. “He always gently encouraged us with positive feedback.”

“We also grappled with the question of what an ‘entertainment robot’ could be. It had to be something that would surprise and delight people. We didn’t have a fixed idea, and we didn’t set out to create a robot dog.”

The team did look to living creatures for inspiration, studying dog and cat locomotion. Their next prototype lost two of the six legs and gained a head, tail, and more sophisticated AI abilities that created the illusion of canine characteristics.

A mid-1998 version of the robot, nicknamed Mutant, ran on Sony’s Aperios OS, the operating system the company developed to control consumer devices. The robot had 16 degrees of freedom, a 64-bit MIPS reduced-instruction-set computer (RISC) processor, and 8 megabytes of DRAM, expandable with a PC card. It could walk on uneven surfaces and use its camera to recognize motion and color—unusual abilities for robots of the time. It could dance, shake its head, wag its tail, sit, lie down, bark, and it could even follow a colored ball around. In fact, it was a little bundle of energy.

Looks-wise, the bot had a sleek new “coat” designed by Doi’s friend Hajime Sorayama, an industrial designer and illustrator known for his silvery gynoids, including the cover art for an Aerosmith album. Sorayama gave the robot a shiny, bulbous exterior that made it undeniably cute. Noma, now the team’s product planner and software engineer, felt they were getting closer to the goal. But when he presented the prototype to Otsuki in 1999, Otsuki was unimpressed. That’s when Noma was dispatched to Ryōan-ji to figure out how to make the robot seem not just cute but somehow alive.

Seeking Zen for Aibo at the Rock Garden

Established in 1450, Ryōan-ji is a Rinzai Zen sanctuary known for its meticulously raked rock garden featuring five distinctive groups of stones. The stones invite observers to quietly contemplate the space, and perhaps even the universe, and that’s what Noma did. He realized what Doi wanted Aibo to convey: a sense of tranquility. The same concept had been incorporated into the design of what was arguably Japan’s first humanoid robot, a large, smiling automaton named Gakutensoku that was unveiled in 1928.

The rock garden at the Ryōan-ji Zen temple features carefully composed groupings of stones with unknown meaning. Bjørn Christian Tørrissen/Wikipedia

Roboticist Masahiro Mori, originator of the Uncanny Valley concept for android design, had written about the relationship between Buddhism and robots back in 1974, stating, “I believe robots have the Buddha-nature within them—that is, the potential for attaining Buddhahood.” Essentially, he believed that even nonliving things were imbued with spirituality, a concept linked to animism in Japan. If machines can be thought of as embodying tranquility and spirituality, they can be easier to relate to, like living things.

“When you make a robot, you want to show what it can do. But if it’s always performing, you’ll get bored and won’t want to live with it,” says Noma. “Just as cats and dogs need quiet time and rest, so do robots.” Noma modified the robot’s behaviors so that it would sometimes slow down and sleep. This reinforced the illusion that it was not only alive but had a will of its own. Otsuki then gave the little robot dog the green light.

The cybernetic canine was named Aibo for “Artificial Intelligence roBOt” and aibō, which means “partner” in Japanese.

In a press release, Sony billed the machine as “an autonomous robot that acts both in response to external stimuli and according to its own judgment. ‘AIBO’ can express various emotions, grow through learning, and communicate with human beings to bring an entirely new form of entertainment into the home.” But it was a lot more than that. Its 18 degrees of freedom allowed for complex motions, and it had a color charge-coupled device (CCD) camera and sensors for touch, acceleration, angular velocity, and range finding. Aibo had the hardware and smarts to back up Sony’s claim that it could “behave like a living creature.” The fact that it couldn’t do anything practical became irrelevant.

The debut Aibo ERS-110 was priced at 250,000 yen (US $2,500, or a little over $4,700 today). A motion editor kit, which allowed users to generate original Aibo motions via their PC, sold for 50,000 yen ($450). Despite the eye-watering price tag, the first batch of 3,000 robots sold out in 20 minutes.

Noma wasn’t surprised by the instant success. “We aimed to realize a society in which people and robots can coexist, not just robots working for humans but both enjoying a relationship of trust,” Noma says. “Based on that, an entertainment robot with a sense of self could communicate with people, grow, and learn.”

Hideko Mori plays fetch with her Aibo ERS-7 in 2015, after it was returned to her from an Aibo hospital. Aibos are popular with seniors in Japan, offering interactivity and companionship without requiring the level of care of a real dog. Toshifumi Kitamura/AFP/Getty Images

Aibo as a Cultural Phenomenon

Aibo was the first consumer robot of its kind, and over the next four years, Sony released multiple versions of its popular pup across two more generations. Some customer responses were unexpected: as a pet and companion, Aibo was helping empty-nest couples rekindle their relationship, improving the lives of children with autism, and having a positive effect on users’ emotional states, according to a 2004 paper by AI specialist Masahiro Fujita, who collaborated with Doi on the early version of Aibo.

“Aibo broke new ground as a social partner. While it wasn’t a replacement for a real pet, it introduced a completely new category of companion robots designed to live with humans,” says Minoru Asada, professor of adaptive machine systems at Osaka University’s graduate school of engineering. “It helped foster emotional connections with a machine, influencing how people viewed robots—not just as tools but as entities capable of forming social bonds. This shift in perception opened the door to broader discussions about human-robot interaction, companionship, and even emotional engagement with artificial beings.”

Building a Custom Robot
  • To create Aibo, Noma and colleagues had to start from scratch—there were no standard CPUs, cameras, or operating systems for consumer robots. They had to create their own, and the result was the Sony Open-R architecture, an unusual approach to robotics that enabled the building of custom machines.
  • Announced in 1998, a year before Aibo’s release, Open-R allowed users to swap out modular hardware components, such as legs or wheels, to adapt a robot for different purposes. High-speed serial buses transmitted data embedded in each module, such as function and position, to the robot’s CPU, which would select the appropriate control signal for the new module. This meant the machine could still use the same motion-control software with the new components. The software relied on plug-and-play prerecorded memory cards, so that the behavior of an Open-R robot could instantly change, say, from being a friendly pet to a challenging opponent in a game. A swap of memory cards could also give the robot image- or sound-recognition abilities. (A toy sketch of this plug-and-play idea follows the sidebar.)
  • “Users could change the modular hardware and software components,” says Noma. “The idea was having the ability to add a remote-control function or swap legs for wheels if you wanted.”
  • Other improvements included different colors, touch sensors, LED faces, emotional expressions, and many more software options. There was even an Aibo that looked like a lion cub. The various models culminated in the sleek ERS-7, released in three versions from 2003 to 2005.
  • Based on Scratch, the visual programming system in the latest versions of Aibo is easy to use and lets owners with limited programming experience create their own complex programs to modify how their robot behaves.
  • The Aibo ERS-1000, unveiled in January 2018, has 22 degrees of freedom, a 64-bit quad-core CPU, and two OLED eyes. It’s more puppylike and smarter than previous models, capable of recognizing 100 faces and responding to 50 voice commands. It can even be “potty trained” and “fed” with virtual food through an app.
    T.H.
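
Sony never published Open-R’s internal interfaces in this form, so the following is only a toy sketch of the plug-and-play idea the sidebar describes: a swapped-in module reports a descriptor with its function and position, and the controller picks a matching control routine while the higher-level motion software stays the same. All names and types here are invented for illustration and are not Sony’s actual Open-R APIs.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModuleDescriptor:
    """Data a hot-swapped module might report over the bus (illustrative only)."""
    kind: str       # e.g. "leg" or "wheel"
    position: str   # e.g. "front-left"

def drive_leg(cmd: str) -> str:
    return f"leg gait controller handles '{cmd}'"

def drive_wheel(cmd: str) -> str:
    return f"wheel velocity controller handles '{cmd}'"

# The controller maps module kinds to control routines, so swapping legs for
# wheels changes which routine runs, not the higher-level motion software.
CONTROLLERS: Dict[str, Callable[[str], str]] = {
    "leg": drive_leg,
    "wheel": drive_wheel,
}

def dispatch(module: ModuleDescriptor, command: str) -> str:
    handler = CONTROLLERS[module.kind]
    return f"{module.position}: {handler(command)}"

print(dispatch(ModuleDescriptor("leg", "front-left"), "walk forward"))
print(dispatch(ModuleDescriptor("wheel", "front-left"), "walk forward"))
```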

Aibo also played a crucial role in the evolution of autonomous robotics, particularly in competitions like RoboCup, notes Asada, who cofounded the robot soccer competition in the 1990s. Whereas custom-built robots were prone to hardware failures, Aibo was consistently reliable and programmable, and so it allowed competitors to focus on advancing software and AI. It became a key tool for testing algorithms in real-world environments.

By the early 2000s, however, Sony was in trouble. Apple and Samsung, which would go on to lead the smartphone revolution, were steadily chipping away at Sony’s position as a consumer-electronics and digital-content powerhouse. When Howard Stringer was appointed Sony’s first non-Japanese CEO in 2005, he implemented a painful restructuring program to make the company more competitive. In 2006, he shut down the robot entertainment division, and Aibo was put to sleep.

What Sony’s executives may not have appreciated was the loyalty and fervor of Aibo buyers. In a petition to keep Aibo alive, one person wrote that the robot was “an irreplaceable family member.” Aibo owners were naming their robots, referring to them with the word ko (which usually denotes children), taking photos with them, going on trips with them, dressing them up, decorating them with ribbons, and even taking them out on “dates” with other Aibos.

For Noma, who has four Aibos at home, this passion was easy to understand.

Hideki Noma [right] poses with his son Yuto and wife Tomoko along with their Aibo friends. At right is an ERS-110 named Robbie (inspired by Isaac Asimov’s “I, Robot”), at the center is a plush Aibo named Choco, and on the left is an ERS-1000 named Murphy (inspired by the film Interstellar). Hideki Noma

“Some owners treat Aibo as a pet, and some treat it as a family member,” he says. “They celebrate its continued health and growth, observe the traditional Shichi-Go-San celebration [for children aged 3, 5, and 7] and dress their Aibos in kimonos.…This idea of robots as friends or family is particular to Japan and can be seen in anime like Astro Boy and Doraemon. It’s natural to see robots as friends we consult with and sometimes argue with.”

The Return of Aibo

With the passion of Aibo fans undiminished and the continued evolution of sensors, actuators, connectivity, and AI, Sony decided to resurrect Aibo after 12 years. Noma and other engineers returned to the team to work on the new version, the Aibo ERS-1000, which was unveiled in January 2018.

Fans of all ages were thrilled. Priced at 198,000 yen ($1,760), not including the mandatory 90,000-yen, three-year cloud subscription service, the first batch sold out in 30 minutes, and 11,111 units sold in the first three months. Since then, Sony has released additional versions with new design features, and the company has also opened up Aibo to some degree of programming, giving users access to visual programming tools and an application programming interface (API).

A quarter century after Aibo was launched, Noma is finally moving on to another job at Sony. He looks back on his 17 years developing the robot with awe. “Even though we imagined a society of humans and robots coexisting, we never dreamed Aibo could be treated as a family member to the degree that it is,” he says. “We saw this both in the earlier versions of Aibo and the latest generation. I’m deeply grateful and moved by this. My wish is that this relationship will continue for a long time.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids Summit: 11–12 December 2024, MOUNTAIN VIEW, CA

Enjoy today’s videos!

Step into the future of factory automation with MagicBot, the cutting-edge humanoid robots from Magiclab. Recently deployed to production lines, these intelligent machines are mastering tasks like product inspections, material transport, precision assembly, barcode scanning, and inventory management.

[ Magiclab ]

Some highlights from the IEEE / RAS International Conference on Humanoid Robots - Humanoids 2024.

[ Humanoids 2024 ]

This beautiful feathered drone, PigeonBot II, comes from David Lentink’s lab at the University of Groningen in the Netherlands. It was featured in Science Robotics just last month.

[ Lentink Lab ] via [ Science ]

Thanks, David!

In this video, Stretch AI takes a language prompt of “Stretch, put the toy in basket” to control Stretch to accomplish the task.

[ Hello Robot ]

Simone Giertz, “the queen of shitty robots,” interviewed by our very own Stephen Cass.

[ IEEE Spectrum ]

We present a perceptive obstacle-avoiding controller for pedipulation, i.e. manipulation with a quadrupedal robot’s foot.

[ Pedipulation ]

Kernel Foods has revolutionized fast food by integrating KUKA robots into its kitchen operations, combining automation with human expertise for consistent and efficient meal preparation. Using the KR AGILUS robot, Kernel optimizes processes like food sequencing, oven operations, and order handling, reducing the workload for employees and enhancing customer satisfaction.

[ Kernel Foods ]

If this doesn’t impress you, skip ahead to 0:52.

[ Paper via arXiv ]

Thanks, Kento!

The cuteness. I can’t handle it.

[ Pollen ]

A group of NTNU academics has launched a new research lab, the Legged Robots for the Arctic & beyond lab, in response to interest within the NTNU student community. If you are a student and have relevant interests, get in touch!

[ NTNU ]

Extend Robotics is pioneering a shift in viticulture with intelligent automation at Saffron Grange Vineyard in Essex, addressing the challenges of grape harvesting with their robotic capabilities. Our collaborative project with Queen Mary University introduces a robotic system capable of identifying ripe grapes through AI-driven visual sensors, which assess ripeness based on internal sugar levels without damaging delicate fruit. Equipped with pressure-sensitive grippers, our robots can handle grapes gently, preserving their quality and value. This precise harvesting approach could revolutionise vineyards, enabling autonomous and remote operations.

[ Extend Robotics ]

Code & Circuit, a non-profit organization based in Amesbury, MA, is a place where kids can use technology to create, collaborate, and learn! Spot is a central part of their program, where educators use the robot to get younger participants excited about STEM fields, coding, and robotics, while advanced learners have the opportunity to build applications using an industrial robot.

[ Code & Circuit ]

During the HUMANOIDS Conference, we had the chance to speak with some of the true rock stars in the world of robotics. While they could discuss robots endlessly, when asked to describe robotics today in just one word, these brilliant minds had to pause and carefully choose the perfect response.

Personally I would not have chosen “exploding.”

[ PAL Robotics ]

Lunabotics gives students at accredited institutions of higher learning an opportunity to apply the NASA systems engineering process to design and build a prototype Lunar construction robot. This robot would be capable of performing the proposed operations on the Lunar surface in support of future Artemis Campaign goals.

[ NASA ]

Before we get into all the other course projects from this term, here are a few free throw attempts from ROB 550’s robotic arm lab earlier this year. Maybe good enough to walk on the Michigan basketball team? Students in ROB 550 cover the basics of robotic sensing, reasoning, and acting in several labs over the course: here the designs to take the ball to the net varied greatly, from hook shots to tension-storing contraptions from downtown. These basics help them excel throughout their robotics graduate degrees and research projects.

[ University of Michigan Robotics ]

Wonder what a Robody can do? This. And more!

[ Devanthro ]

It’s very satisfying watching Dusty print its way around obstacles.

[ Dusty Robotics ]

Ryan Companies has deployed Field AI’s autonomy software on a quadruped robot in the company’s ATX Tower site in Austin, TX, to greatly improve its daily surveying and data collection processes.

[ Field AI ]

Since landing its first rover on Mars in 1997, NASA has pushed the boundaries of exploration with increasingly larger and more sophisticated robotic explorers. Each mission builds on the lessons learned from the Red Planet, leading to breakthroughs in technology and our understanding of Mars. From the microwave-sized Sojourner to the SUV-sized Perseverance—and even taking flight with the groundbreaking Ingenuity helicopter—these rovers reflect decades of innovation and the drive to answer some of science’s biggest questions. This is their evolution.

[ NASA ]

Welcome to things that are safe to do only with a drone.

[ Team BlackSheep ]



On the shores of Lake Geneva in Switzerland, École Polytechnique Fédérale de Lausanne is home to many roboticists. It’s also home to many birds, which spend the majority of their time doing bird things. With a few exceptions, those bird things aren’t actually flying: Flying is a lot of work, and many birds have figured out that they can instead just walk around on the ground, where all the food tends to be, and not tire themselves out by having to get airborne over and over again.

“Whenever I encountered crows on the EPFL campus, I would observe how they walked, hopped over or jumped on obstacles, and jumped for take-offs,” says Won Dong Shin, a doctoral student at EPFL’s Laboratory of Intelligent Systems. “What I consistently observed was that they always jumped to initiate flight, even in situations where they could have used only their wings.”

Shin is first author on a paper published today in Nature that explores both why birds jump to take off, and how that can be beneficially applied to fixed-wing drones, which otherwise need things like runways or catapults to get themselves off the ground. Shin’s RAVEN (Robotic Avian-inspired Vehicle for multiple ENvironments) drone, with its bird-inspired legs, can do jumping takeoffs just like crows do, and can use those same legs to get around on the ground pretty well, too.

The drone’s bird-inspired legs adopted some key principles of biological design like the ability to store and release energy in tendon-like springs along with some flexible toes. EPFL

Back in 2019, we wrote about a South African startup called Passerine which had a similar idea, albeit more focused on using legs to launch fixed-wing cargo drones into the air. This is an appealing capability for drones, because it means that you can take advantage of the range and endurance that you get with a fixed wing without having to resort to inefficient tricks like stapling a bunch of extra propellers to yourself to get off the ground. “The concept of incorporating jumping take-off into a fixed-wing vehicle is the common idea shared by both RAVEN and Passerine,” says Shin. “The key difference lies in their focus: Passerine concentrated on a mechanism solely for jumping, while RAVEN focused on multifunctional legs.”

Bio-inspired Design for Drones

Multifunctional legs bring RAVEN much closer to birds, and although these mechanical legs are not nearly as complex and capable as actual bird legs, adopting some key principles of biological design (like the ability to store and release energy in tendon-like springs along with some flexible toes) allows RAVEN to get around in a very bird-like way.

EPFL

Despite its name, RAVEN is approximately the size of a crow, with a wingspan of 100 centimeters and a body length of 50 cm. It can walk a meter in just under four seconds, hop over 12 cm gaps, and jump onto the top of a 26 cm obstacle. For the jumping takeoff, RAVEN’s legs propel the drone to a starting altitude of nearly half a meter, with a forward velocity of 2.2 m/s.

RAVEN’s toes are particularly interesting, especially after you see how hard the poor robot faceplants without them:

Without toes, RAVEN face-plants when it tries to walk. EPFL

“It was important to incorporate a passive elastic toe joint to enable multiple gait patterns and ensure that RAVEN could jump at the correct angle for takeoff,” Shin explains. Most bipedal robots have actuated feet that allow for direct control for foot angles, but for a robot that flies, you can’t just go adding actuators all over the place willy-nilly because they weigh too much. As it is, RAVEN’s a 620-gram drone of which a full 230 grams consists of feet and toes and actuators and whatnot.

Actuated hip and ankle joints form a simplified but still birdlike leg, while springs in the ankle and toe joints help to absorb force and store energy. EPFL

Why Add Legs to a Drone?

So the question is, is all of this extra weight and complexity of adding legs actually worth it? In one sense, it definitely is, because the robot can do things that it couldn’t do before—walking around on the ground and taking off from the ground by itself. But it turns out that RAVEN is light enough, and has a sufficiently powerful motor, that as long as it’s propped up at the right angle, it can take off from the ground without jumping at all. In other words, if you replaced the legs with a couple of popsicle sticks just to tilt the drone’s nose up, would that work just as well for the ground takeoffs?

The researchers tested this, and found that non-jumping takeoffs were crappy. The mix of high angle of attack and low takeoff speed led to very unstable flight—it worked, but barely. Jumping, on the other hand, ends up being about ten times more energy efficient overall than a standing takeoff. As the paper summarizes, “although jumping take-off requires slightly higher energy input, it is the most energy-efficient and fastest method to convert actuation energy to kinetic and potential energies for flight.” And just like birds, RAVEN can also take advantage of its legs to move on the ground in a much more energy efficient way relative to making repeated short flights.
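
If you want a rough sense of the energies involved, the numbers quoted above are enough for a back-of-envelope estimate: a 620-gram drone leaves the ground at roughly half a meter of altitude and 2.2 meters per second of forward speed. The short calculation below is only an illustration based on those published figures; it ignores drag, losses in the legs, and any propeller thrust during the jump, so treat it as an order-of-magnitude sketch rather than a recreation of the paper’s energy analysis.

```python
# Back-of-envelope estimate of the mechanical energy RAVEN's legs impart
# during a jumping takeoff, using only figures quoted in this article.
# It ignores drag, transmission losses, and propeller thrust during the jump.

MASS_KG = 0.62          # total drone mass (620 grams)
JUMP_HEIGHT_M = 0.5     # approximate altitude gained by the jump
TAKEOFF_SPEED_MS = 2.2  # forward velocity at takeoff
G = 9.81                # gravitational acceleration, m/s^2

kinetic_energy = 0.5 * MASS_KG * TAKEOFF_SPEED_MS**2   # ~1.5 J
potential_energy = MASS_KG * G * JUMP_HEIGHT_M         # ~3.0 J

print(f"Kinetic energy at takeoff:  {kinetic_energy:.2f} J")
print(f"Potential energy at ~0.5 m: {potential_energy:.2f} J")
print(f"Total mechanical energy:    {kinetic_energy + potential_energy:.2f} J")
```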

Won Dong Shin holds the RAVEN drone. EPFL

Can This Design Scale Up to Larger Fixed-Wing Drones?

Birds use their legs for all kinds of stuff besides walking and hopping and jumping, of course, and Won Dong Shin hopes that RAVEN may be able to do more with its legs, too. The obvious one is using legs for landing: “Birds use their legs to decelerate and reduce impact, and this same principle could be applied to RAVEN’s legs,” Shin says, although the drone would need a perception system that it doesn’t yet have to plan things out. There’s also swimming, perching, and snatching, all of which would require a new foot design.

We also asked Shin about what it would take to scale this design up, to perhaps carry a useful payload at some point. Shin points out that beyond a certain size, birds are no longer able to do jumping takeoffs, and either have to jump off something higher up or find themselves a runway. In fact, some birds will go to astonishing lengths not to have to do jumping takeoffs, as best human of all time David Attenborough explains:

BBC

Shin points out that it’s usually easier to scale engineered systems than biological ones, and he seems optimistic that legs for jumping takeoffs will be viable on larger fixed-wing drones that could be used for delivery. A vision system that could be used for both obstacle avoidance and landing is in the works, as are wings that can fold to allow the drone to pass through narrow gaps. Ultimately, Shin says that he wants to make the drone as bird-like as possible: “I am also keen to incorporate flapping wings into RAVEN. This enhancement would enable more bird-like motion and bring more interesting research questions to explore.”

“Fast ground-to-air transition with avian-inspired multifunctional legs,” by Won Dong Shin, Hoang-Vu Phan, Monica A. Daley, Auke J. Ijspeert, and Dario Floreano from EPFL in Switzerland and UC Irvine, appears in the December 4 issue of Nature.



Ruzena Bajcsy is one of the founders of the modern field of robotics. With an education in electrical engineering in Slovakia, followed by a Ph.D. at Stanford, Bajcsy was the first woman to join the engineering faculty at the University of Pennsylvania. She was the first, she says, because “in those days, nice girls didn’t mess around with screwdrivers.” Bajcsy, now 91, spoke with IEEE Spectrum at the 40th anniversary celebration of the IEEE International Conference on Robotics and Automation, in Rotterdam, Netherlands.

Ruzena Bajcsy

Ruzena Bajcsy’s 50-plus years in robotics spanned time at Stanford, the University of Pennsylvania, the National Science Foundation, and the University of California, Berkeley. Bajcsy retired in 2021.

What was the robotics field like at the time of the first ICRA conference in 1984?

Ruzena Bajcsy: There was a lot of enthusiasm at that time—it was like a dream; we felt like we could do something dramatic. But this is typical, and when you move into a new area and you start to build there, you find that the problem is harder than you thought.

What makes robotics hard?

Bajcsy: Robotics was perhaps the first subject which really required an interdisciplinary approach. In the beginning of the 20th century, there was physics and chemistry and mathematics and biology and psychology, all with brick walls between them. The physicists were much more focused on measurement, and understanding how things interacted with each other. During the war, there was a select group of men who didn’t think that mortal people could do this. They were so full of themselves. I don’t know if you saw the Oppenheimer movie, but I knew some of those men—my husband was one of those physicists!

And how are roboticists different?

Bajcsy: We are engineers. For physicists, it’s the matter of discovery, done. We, on the other hand, in order to understand things, we have to build them. It takes time and effort, and frequently we are inhibited—when I started, there were no digital cameras, so I had to build one. I built a few other things like that in my career, not as a discovery, but as a necessity.

How can robotics be helpful?

Bajcsy: As an elderly person, I use this cane. But when I’m with my children, I hold their arms and it helps tremendously. In order to keep your balance, you are taking all the vectors of your torso and your legs so that you are stable. You and I together can create a configuration of our legs and body so that the sum is stable.

One very simple useful device for an older person would be to have a cane with several joints that can adjust depending on the way I move, to compensate for my movement. People are making progress in this area, because many people are living longer than before. There are all kinds of other places where the technology derived from robotics can help like this.

What are you most proud of?

Bajcsy: At this stage of my life, people are asking, and I’m asking, what is my legacy? And I tell you, my legacy is my students. They worked hard, but they felt they were appreciated, and there was a sense of camaraderie and support for each other. I didn’t do it consciously, but I guess it came from my motherly instincts. And I’m still in contact with many of them—I worry about their children, the usual grandma!

This article appears in the December 2024 issue as “5 Questions for Ruzena Bajcsy.”



Finding it hard to get the perfect angle for your shot? PhotoBot can take the picture for you. Tell it what you want the photo to look like, and your robot photographer will present you with references to mimic. Pick your favorite, and PhotoBot—a robot arm with a camera—will adjust its position to match the reference and your picture. Chances are, you’ll like it better than your own photography.

“It was a really fun project,” says Oliver Limoyo, one of the creators of PhotoBot. He enjoyed working at the intersection of several fields: human-robot interaction, large language models, and classical computer vision were all necessary to create the robot.

Limoyo worked on PhotoBot while at Samsung, with his manager Jimmy Li. They were working on a project to have a robot take photographs but were struggling to find a good metric for aesthetics. Then they saw the Getty Image Challenge, where people recreated famous artwork at home during the COVID lockdown. The challenge gave Limoyo and Li the idea to have the robot select a reference image to inspire the photograph.

To get PhotoBot working, Limoyo and Li had to figure out two things: how best to find reference images of the kind of photo you want and how to adjust the camera to match that reference.

Suggesting a Reference Photograph

To start using PhotoBot, first you have to provide it with a written description of the photo you want. (For example, you could type “a picture of me looking happy”.) Then PhotoBot scans the environment around you, identifying the people and objects it can see. It next finds a set of similar photos from a database of labeled images that have those same objects.

Next an LLM compares your description and the objects in the environment with that smaller set of labeled images, providing the closest matches to use as reference images. The LLM can be programmed to return any number of reference photographs.

For example, when asked for “a picture of me looking grumpy,” it might identify a person, glasses, a jersey, and a cup in the environment. PhotoBot would then deliver a reference image of a frazzled man holding a mug in front of his face, among other choices.
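
Read literally, that description suggests a simple two-stage retrieval pipeline: an object detector narrows the gallery to reference images containing the same kinds of things the camera can currently see, and a language model then ranks the survivors against the user’s request. The sketch below is a hypothetical illustration of that flow, not PhotoBot’s actual code; the detector, the LLM-style ranking step, and the gallery format are all stand-ins.

```python
# Hypothetical sketch of a two-stage reference retrieval like the one
# described above. The "detector," the LLM-style ranking, and the gallery
# format are stand-ins for illustration, not PhotoBot's implementation.

from dataclasses import dataclass

@dataclass
class GalleryImage:
    path: str
    labels: set[str]   # objects annotated in the reference image
    caption: str       # short text description of the reference image

def detect_objects(camera_frame) -> set[str]:
    # Stand-in for an off-the-shelf object detector run on the live view.
    return {"person", "glasses", "cup"}

def rank_candidates(user_request: str, candidates: list[GalleryImage],
                    scene_objects: set[str], top_k: int) -> list[GalleryImage]:
    # Stand-in for the LLM step, which would compare the request and the
    # visible objects against each candidate's caption. Here we just score
    # by label overlap plus crude keyword matching.
    def score(img: GalleryImage) -> int:
        overlap = len(img.labels & scene_objects)
        keywords = sum(word in img.caption.lower()
                       for word in user_request.lower().split())
        return overlap + keywords
    return sorted(candidates, key=score, reverse=True)[:top_k]

def suggest_references(user_request: str, camera_frame,
                       gallery: list[GalleryImage], top_k: int = 3):
    scene_objects = detect_objects(camera_frame)
    # Stage 1: keep only gallery images that share objects with the scene.
    candidates = [img for img in gallery if img.labels & scene_objects]
    # Stage 2: rank the remaining candidates against the user's request.
    return rank_candidates(user_request, candidates, scene_objects, top_k)

gallery = [
    GalleryImage("grumpy.jpg", {"person", "cup"},
                 "a frazzled man looking grumpy, holding a mug"),
    GalleryImage("beach.jpg", {"person", "umbrella"},
                 "a happy person relaxing on a beach"),
]
print(suggest_references("a picture of me looking grumpy", None, gallery))
```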

After the user selects the reference photograph they want their picture to mimic, PhotoBot moves its robot arm to correctly position the camera to take a similar picture.

Adjusting the Camera to Fit a Reference

To move the camera to the perfect position, PhotoBot starts by identifying features that are the same in both images, for example, someone’s chin, or the top of a shoulder. It then solves a “perspective-n-point” (PnP) problem, which involves taking a camera’s 2D view and matching it to a 3D position in space. Once PhotoBot has located itself in space, it then solves how to move the robot’s arm to transform its view to look like the reference image. It repeats this process a few times, making incremental adjustments as it gets closer to the correct pose.
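
Solving a PnP problem is a standard computer-vision operation, and libraries such as OpenCV expose it directly as cv2.solvePnP, which recovers the camera’s rotation and translation from matched 2D image points and their known 3D positions. The snippet below is a generic illustration of that single step, with made-up point correspondences and camera intrinsics standing in for the matched features PhotoBot actually uses; it is not the system’s own code.

```python
# Generic perspective-n-point (PnP) example with OpenCV: recover the camera
# pose from matched 2D image points and their known 3D positions. The point
# correspondences and intrinsics below are made up for illustration only.

import cv2
import numpy as np

# 3D feature positions in a scene frame (meters) and their 2D projections in
# the current camera image (pixels). In a system like PhotoBot, these would
# come from features matched between the live view and the reference photo.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0],
    [0.0, 0.1, 0.0],
    [0.05, 0.05, 0.1],
    [0.0, 0.0, 0.1],
], dtype=np.float64)

image_points = np.array([
    [320.0, 240.0],
    [400.0, 242.0],
    [398.0, 320.0],
    [322.0, 318.0],
    [362.0, 275.0],
    [321.0, 200.0],
], dtype=np.float64)

# Simple pinhole intrinsics: focal length in pixels and principal point.
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    rotation_matrix, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation
    print("Camera rotation:\n", rotation_matrix)
    print("Camera translation:\n", tvec.ravel())
```

From a pose like this, a system such as PhotoBot can compare where the camera is with where the reference image implies it should be, command the arm to shrink the difference, and repeat until the two views line up.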

Then PhotoBot takes your picture.

PhotoBot’s developers compared portraits with and without their system. Samsung/IEEE

To test whether images taken by PhotoBot were more appealing than amateur human photography, Limoyo’s team had eight people use the robot’s arm and camera to take photographs of themselves and then use PhotoBot to take a robot-assisted photograph. They then asked 20 new people to evaluate the two photographs, judging which was more aesthetically pleasing while addressing the user’s specifications (happy, excited, surprised, and so on). Overall, PhotoBot was the preferred photographer 242 times out of 360, or 67 percent of the time.

PhotoBot was presented on 16 October at the IEEE/RSJ International Conference on Intelligent Robots and Systems.

Although the project is no longer in development, Li thinks someone should create an app based on the underlying programming, enabling friends to take better photos of each other. “Imagine right on your phone, you see a reference photo. But you also see what the phone is seeing right now, and then that allows you to move around and align.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids Summit: 11–12 December 2024, MOUNTAIN VIEW, CA

Enjoy today’s videos!

Proxie represents the future of automation, combining advanced AI, mobility, and modular manipulation systems with refined situational awareness to support seamless human-robot collaboration. The first-of-its-kind, highly adaptable collaborative robot takes on the demanding material-handling tasks that keep the world moving. Cobot is incredibly proud to count industry leaders Maersk, Mayo Clinic, Moderna, Owens & Minor, and Tampa General Hospital among its first customers.

[ Cobot ]

It’s the world’s first successful completion of a full marathon (42.195km) by a quadruped robot, and RaiLab KAIST has helpfully uploaded all 4 hours 20 minutes of it.

[ RaiLab KAIST ]

Figure 02 has been keeping busy.

I’m obligated to point out that without more context, there are some things that are not clear in this video. For example, “reliability increased 7x” doesn’t mean anything when we don’t know what the baseline was. There’s also a jump cut right before the robot finishes the task. Which may not mean anything, but, you know, it’s a robot video, so we always have to be careful.

[ Figure ]

We conducted a 6-hour continuous demonstration and testing of HECTOR in the Mojave Desert, battling unusually strong gusts and low temperatures. For fair testing, we purposely avoided using any protective weather covers on HECTOR, leaving its semi-exposed leg transmission design vulnerable to dirt and sand infiltrating the body and transmission systems. Remarkably, it exhibited no signs of mechanical malfunction—at least until the harsh weather became too unbearable for us humans to continue!

[ USC ]

A banked turn is a common flight maneuver observed in birds and aircraft. To initiate the turn, whereas traditional aircraft rely on the wing ailerons, most birds use a variety of asymmetric wing-morphing control techniques to roll their bodies and thus redirect the lift vector to the direction of the turn. Here, we developed and used a raptor-inspired feathered drone to find that the proximity of the tail to the wings causes asymmetric wing-induced flows over the twisted tail and thus lift asymmetry, resulting in both roll and yaw moments sufficient to coordinate banked turns.

[ Paper ] via [ EPFLLIS ]

A futuristic NASA mission concept envisions a swarm of dozens of self-propelled, cellphone-size robots exploring the oceans beneath the icy shells of moons like Jupiter’s Europa and Saturn’s Enceladus, looking for chemical and temperature signals that could point to life. A series of prototypes for the concept, called SWIM (Sensing With Independent Micro-swimmers), braved the waters of a competition swim pool at Caltech in Pasadena, California, for testing in 2024.

[ NASA ]

The Stanford Robotics Center brings together cross-disciplinary world-class researchers with a shared vision of robotics’ future. Stanford’s robotics researchers, once dispersed in labs across campus, now have a unified, state-of-the-art space for groundbreaking research, education, and collaboration.

[ Stanford ]

Agility Robotics’ Chief Technology Officer, Pras Velagapudi, explains what happens when we use natural language voice commands and tools like an LLM to get Digit to do work.

[ Agility ]

Agriculture, fisheries, and aquaculture are important global contributors to the production of food from land and sea for human consumption. Unmanned underwater vehicles (UUVs) have become indispensable tools for inspection, maintenance, and repair (IMR) operations in the aquaculture domain. The major focus and novelty of this work is collision-free autonomous navigation of UUVs in dynamically changing environments.

[ Paper ] via [ SINTEF ]

Thanks, Eleni!

—O_o—

[ Reachy ]

Nima Fazeli, assistant professor of robotics, was awarded the National Science Foundation’s Faculty Early Career Development (CAREER) grant for a project “to realize intelligent and dexterous robots that seamlessly integrate vision and touch.”

[ MMint Lab ]

This video demonstrates the process of sealing a fire door using a sealant application. In cases of radioactive material leakage at nuclear facilities or toxic gas leaks at chemical plants, field operators often face the risk of directly approaching the leakage site to block it. This video showcases the use of a robot to safely seal doors or walls in the event of hazardous material leakage accidents at nuclear power plants, chemical plants, and similar facilities.

[ KAERI ]

How is this thing still so cool?

[ OLogic ]

Drag your mouse or move your phone to explore this 360-degree panorama provided by NASA’s Curiosity Mars rover. This view was captured just before the rover exited Gediz Vallis channel, which likely was formed by ancient floodwaters and landslides.

[ NASA ]

This GRASP on Robotics talk is by Damion Shelton of Agility Robotics, on “What do we want from our machines?”

The purpose of this talk is twofold. First, humanoid robots – since they look like us, occupy our spaces, and are able to perform tasks in a manner similar to us – are the ultimate instantiation of “general purpose” robots. What are the ethical, legal, and social implications of this sort of technology? Are robots like Digit actually different from a pick and place machine, or a Roomba? And second, does this situation change when you add advanced AI?

[ UPenn ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2024: 22–24 November 2024, NANCY, FRANCE

Humanoids Summit: 11–12 December 2024, MOUNTAIN VIEW, CA

Enjoy today’s videos!

Don’t get me wrong, this is super impressive, but I’m like 95% sure that there’s a human driving it. For robots like these to be useful, they’ll need to be autonomous, and high-speed autonomy over unstructured terrain is still very much a work in progress.

[ Deep Robotics ]

Dung beetles impressively coordinate their six legs simultaneously to effectively roll large dung balls. They are also capable of rolling dung balls of varying weight on different terrains. The mechanisms underlying how their motor commands are adapted to walk and simultaneously roll balls (multitasking behavior) under different conditions remain unknown. Therefore, this study unravels the mechanisms of how dung beetles roll dung balls and adapt their leg movements to stably roll balls over different terrains for multitasking robots.

[ Paper ] via [ Advanced Science News ]

Subsurface lava tubes have been detected from orbit on both the Moon and Mars. These natural voids are potentially the best place for long-term human habitations, because they offer shelter against radiation and meteorites. This work presents the development and implementation of a novel Tether Management and Docking System (TMDS) designed to support the vertical rappel of a rover through a skylight into a lunar lava tube. The TMDS connects two rovers via a tether, enabling them to cooperate and communicate during such an operation.

[ DFKI Robotics Innovation Center ]

Ad Spiers at Imperial College London writes, “We’ve developed an $80 barometric tactile sensor that, unlike past efforts, is easier to fabricate and repair. By training a machine learning model on controlled stimulation of the sensor we have been able to increase the resolution from 6 mm to 0.28 mm. We also implement it in one of our E-Troll robotic grippers, allowing the estimation of object position and orientation.”

[ Imperial College London ] via [ Ad Spiers ]

Thanks, Ad!

A robot, trained for the first time to perform surgical procedures by watching videos of robotic surgeries, executed the same procedures—but with considerably more precision.

[ Johns Hopkins University ]

Thanks, Dina!

This is brilliant but I’m really just in it for the satisfying noise it makes.

[ RoCogMan Lab ]

Fast and accurate physics simulation is an essential component of robot learning, where robots can explore failure scenarios that are difficult to produce in the real world and learn from unlimited on-policy data. Yet, it remains challenging to incorporate RGB-color perception into the sim-to-real pipeline that matches the real world in its richness and realism. In this work, we train a robot dog in simulation for visual parkour. We propose a way to use generative models to synthesize diverse and physically accurate image sequences of the scene from the robot’s ego-centric perspective. We present demonstrations of zero-shot transfer to the RGB-only observations of the real world on a robot equipped with a low-cost, off-the-shelf color camera.

[ MIT CSAIL ]

WalkON Suit F1 is a powered exoskeleton designed to walk and balance independently, offering enhanced mobility and independence. Users with paraplegia can easily transfer into the suit directly from their wheelchair, ensuring exceptional usability for people with disabilities.

[ Angel Robotics ]

In order to promote the development of the global embodied AI industry, the Unitree G1 robot operation dataset is open-sourced, adapted to a variety of open-source solutions, and continuously updated.

[ Unitree Robotics ]

Spot encounters all kinds of obstacles and environmental changes, but it still needs to safely complete its mission without getting stuck, falling, or breaking anything. While there are challenges and obstacles that we can anticipate and plan for—like stairs or forklifts—there are many more that are difficult to predict. To help tackle these edge cases, we used AI foundation models to give Spot a better semantic understanding of the world.

[ Boston Dynamics ]

Wing drone deliveries of NHS blood samples are now underway in London between Guy’s and St Thomas’ hospitals.

[ Wing ]

As robotics engineers, we love the authentic sounds of robotics—the metal clinking and feet contacting the ground. That’s why we value unedited, raw footage of robots in action. Although unpolished, these candid captures let us witness the evolution of robotics technology without filters, which is truly exciting.

[ UCR ]

Eight minutes of chill mode thanks to Kuka’s robot DJs, which make up the supergroup the Kjays.

A KR3 AGILUS on the drums loops its beats and sets the tempo. The KR CYBERTECH nano is our nimble DJ with rhythm in its blood. A KR AGILUS performs as a light artist, enchanting with soft and expansive movements. And an LBR Med, mounted on the ceiling, keeps an eye on the unusual robot party.

[ Kuka Robotics Corp. ]

Am I the only one disappointed that this isn’t actually a little mini Ascento?

[ Ascento Robotics ]

This demo showcases our robot performing autonomous table wiping powered by Deep Predictive Learning developed by Ogata Lab at Waseda University. Through several dozen human teleoperation demonstrations, the robot has learned natural wiping motions.

[ Tokyo Robotics ]

What’s green, bidirectional, and now driving autonomously in San Francisco and the Las Vegas Strip? The Zoox robotaxi! Give us a wave if you see us on the road!

[ Zoox ]

Northrop Grumman has been pioneering capabilities in the undersea domain for more than 50 years. Now, we are creating a new class of uncrewed underwater vehicles (UUV) with Manta Ray. Taking its name from the massive “winged” fish, Manta Ray will operate long-duration, long-range missions in ocean environments where humans can’t go.

[ Northrop Grumman ]

I was at ICRA 2024 and I didn’t see most of the stuff in this video.

[ ICRA 2024 ]

A fleet of marble-sculpting robots is carving out the future of the art world. It’s a move some artists see as cheating, but others are embracing the change.

[ CBS ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2024: 22–24 November 2024, NANCY, FRANCEHumanoids Summit: 11–12 December 2024, MOUNTAIN VIEW, CA

Enjoy today’s videos!

Don’t get me wrong, this is super impressive, but I’m like 95% sure that there’s a human driving it. For robots like these to be useful, they’ll need to be autonomous, and high speed autonomy over unstructured terrain is still very much a work in progress.

[ Deep Robotics ]

Dung beetles impressively coordinate their six legs simultaneously to effectively roll large dung balls. They are also capable of rolling dung balls varying in the weight on different terrains. The mechanisms underlying how their motor commands are adapted to walk and simultaneously roll balls (multitasking behavior) under different conditions remain unknown. Therefore, this study unravels the mechanisms of how dung beetles roll dung balls and adapt their leg movements to stably roll balls over different terrains for multitasking robots.

[ Paper ] via [ Advanced Science News ]

Subsurface lava tubes have been detected from orbit on both the Moon and Mars. These natural voids are potentially the best place for long-term human habitations, because they offer shelter against radiation and meteorites. This work presents the development and implementation of a novel Tether Management and Docking System (TMDS) designed to support the vertical rappel of a rover through a skylight into a lunar lava tube. The TMDS connects two rovers via a tether, enabling them to cooperate and communicate during such an operation.

[ DFKI Robotics Innovation Center ]

Ad Spiers at Imperial College London writes, “We’ve developed an $80 barometric tactile sensor that, unlike past efforts, is easier to fabricate and repair. By training a machine learning model on controlled stimulation of the sensor, we have been able to increase the resolution from 6 mm to 0.28 mm. We also implement it in one of our E-Troll robotic grippers, allowing the estimation of object position and orientation.”

[ Imperial College London ] via [ Ad Spiers ]

Thanks Ad!
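
The resolution jump Spiers describes, from 6 mm down to 0.28 mm, comes from regression over the raw pressure readings rather than from packing in more sensing elements. Below is a hedged sketch of that general idea; the taxel layout, probe rig, and model choice are assumptions made for illustration, not details of the actual sensor.

    # Sketch: super-resolving a coarse barometric tactile array with regression.
    # Assumptions (not from the post): a 4x4 grid of taxels at 6 mm pitch, and a
    # rig that presses the pad at known (x, y) positions to collect labels.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def fake_reading(xy, grid=4, pitch_mm=6.0, noise=0.02):
        """Stand-in for a real probe measurement: each taxel responds with a
        Gaussian bump centred on the contact point."""
        centres = np.stack(np.meshgrid(np.arange(grid), np.arange(grid)), -1)
        centres = centres.reshape(-1, 2) * pitch_mm
        d2 = ((centres - xy) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * pitch_mm**2)) + noise * rng.standard_normal(grid * grid)

    # Labelled (reading, position) pairs collected across the 18 mm x 18 mm pad.
    positions = rng.uniform(0, 18, size=(2000, 2))
    readings = np.array([fake_reading(p) for p in positions])

    X_tr, X_te, y_tr, y_te = train_test_split(readings, positions, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)

    err_mm = np.linalg.norm(model.predict(X_te) - y_te, axis=1).mean()
    print(f"mean localisation error: {err_mm:.2f} mm")  # well below the 6 mm pitch

Swapping the position labels for object-pose labels gathered with the gripper would extend the same recipe toward the position and orientation estimation mentioned above.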

A robot, trained for the first time to perform surgical procedures by watching videos of robotic surgeries, executed the same procedures—but with considerably more precision.

[ Johns Hopkins University ]

Thanks, Dina!
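
Training from recorded surgeries is, at its core, imitation learning from video: camera frames go in, and the instrument motion the surgeon produced through the robot serves as the training target. The sketch below shows only that generic recipe; the backbone, frame stacking, and seven-dimensional action space are assumptions, not the Johns Hopkins team's actual architecture.

    # Sketch: video-to-action imitation learning, the general recipe behind
    # "learning surgical procedures from recorded videos". Architecture and
    # action space here are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class VideoPolicy(nn.Module):
        """Maps a short stack of video frames to an instrument-motion command."""
        def __init__(self, n_frames: int = 4, action_dim: int = 7):
            super().__init__()
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()              # 512-d features per frame
            self.backbone = backbone
            self.head = nn.Sequential(
                nn.Linear(512 * n_frames, 256), nn.ReLU(),
                nn.Linear(256, action_dim),          # e.g. delta pose + gripper
            )
            self.n_frames = n_frames

        def forward(self, frames):                   # (batch, n_frames, 3, H, W)
            b, t, c, h, w = frames.shape
            feats = self.backbone(frames.reshape(b * t, c, h, w))
            return self.head(feats.reshape(b, t * 512))

    # Training would minimise the error between predicted commands and the
    # instrument motion recorded alongside each video clip (behaviour cloning).
    policy = VideoPolicy()
    dummy_clip = torch.randn(1, 4, 3, 224, 224)
    print(policy(dummy_clip).shape)                  # torch.Size([1, 7])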

This is brilliant but I’m really just in it for the satisfying noise it makes.

[ RoCogMan Lab ]

Fast and accurate physics simulation is an essential component of robot learning, where robots can explore failure scenarios that are difficult to produce in the real world and learn from unlimited on-policy data. Yet, it remains challenging to incorporate RGB-color perception into the sim-to-real pipeline that matches the real world in its richness and realism. In this work, we train a robot dog in simulation for visual parkour. We propose a way to use generative models to synthesize diverse and physically accurate image sequences of the scene from the robot’s ego-centric perspective. We present demonstrations of zero-shot transfer to the RGB-only observations of the real world on a robot equipped with a low-cost, off-the-shelf color camera.

[ MIT CSAIL ]
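
The trick here is to let the simulator supply geometry and physics while a generative image model supplies visual realism, so the policy only ever consumes RGB and can later be pointed at a real color camera. The sketch below shows the rough shape of such a pipeline using placeholder functions; the paper's actual models, prompts, and interfaces are not reproduced here.

    # Sketch of the "generative sim-to-real" idea: the simulator provides cheap,
    # perfectly-labelled geometry (depth / segmentation), a conditional image
    # generator turns it into varied photorealistic RGB, and the policy only
    # ever sees RGB. All functions below are placeholders.
    import numpy as np

    def simulate_step(action):
        """Placeholder physics step returning ego-centric depth + segmentation."""
        depth = np.random.rand(64, 64).astype(np.float32)
        seg = np.random.randint(0, 5, (64, 64))
        reward, done = 0.0, False
        return depth, seg, reward, done

    def generate_rgb(depth, seg, prompt="forest trail, harsh sunlight"):
        """Placeholder for a conditional image generator (e.g. a depth-conditioned
        diffusion model) that renders a realistic scene consistent with the
        simulated geometry. Varying the prompt would add visual diversity."""
        rgb = np.stack([depth, seg / 4.0, depth], axis=-1)
        return (255 * rgb).astype(np.uint8)

    def collect_rollout(policy, steps=100):
        """On-policy data collection where the policy only ever sees generated
        RGB, so the same policy can later consume frames from a real, low-cost
        color camera with no fine-tuning (zero-shot transfer)."""
        frames, actions = [], []
        obs_rgb = generate_rgb(*simulate_step(None)[:2])
        for _ in range(steps):
            action = policy(obs_rgb)
            depth, seg, reward, done = simulate_step(action)
            obs_rgb = generate_rgb(depth, seg)
            frames.append(obs_rgb)
            actions.append(action)
            if done:
                break
        return frames, actions

    # Example with a trivial stand-in policy:
    frames, actions = collect_rollout(lambda rgb: np.zeros(12))
    print(len(frames), frames[0].shape)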

WalkON Suit F1 is a powered exoskeleton that walks and balances on its own, offering enhanced mobility and independence. Users with paraplegia can transfer into the suit directly from their wheelchair, making it exceptionally usable for people with disabilities.

[ Angel Robotics ]

To promote the development of the global embodied-AI industry, Unitree has open-sourced its G1 robot operation dataset. The dataset is adapted to a variety of open-source solutions and will be continuously updated.

[ Unitree Robotics ]

Spot encounters all kinds of obstacles and environmental changes, but it still needs to safely complete its mission without getting stuck, falling, or breaking anything. While there are challenges and obstacles that we can anticipate and plan for—like stairs or forklifts—there are many more that are difficult to predict. To help tackle these edge cases, we used AI foundation models to give Spot a better semantic understanding of the world.

[ Boston Dynamics ]
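
The payoff of a foundation model here is semantic: a purely geometric planner treats every blocker as an obstacle of some size, whereas a semantic label lets the robot respond differently to a person than to a cardboard box. Below is a minimal sketch of that pattern; the query_vlm() call and the behaviour table are illustrative placeholders, not Boston Dynamics' implementation.

    # Sketch of using a vision(-language) foundation model for semantic
    # awareness: classify what an unexpected obstacle is, then pick a behaviour.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str        # open-vocabulary label from the foundation model
        confidence: float

    def query_vlm(image_bytes: bytes, candidate_labels: list[str]) -> Detection:
        """Placeholder for a real vision-language model call; here it returns a
        canned answer so the sketch runs end to end."""
        return Detection(label=candidate_labels[0], confidence=0.9)

    # Semantic label -> navigation behaviour. A purely geometric planner would
    # treat all of these identically as "obstacle of size X".
    BEHAVIOURS = {
        "person": "stop_and_wait",
        "forklift": "stop_and_wait",
        "cardboard box": "push_through_or_reroute",
        "puddle": "traverse_slowly",
        "stairs": "switch_to_stair_mode",
    }

    def choose_behaviour(det: Detection, default: str = "reroute") -> str:
        if det.confidence < 0.5:          # unsure -> be conservative
            return default
        return BEHAVIOURS.get(det.label, default)

    det = query_vlm(b"", list(BEHAVIOURS))
    print(choose_behaviour(det))          # -> "stop_and_wait"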

Wing drone deliveries of NHS blood samples are now underway in London between Guy’s and St Thomas’ hospitals.

[ Wing ]

