Feed aggregator



When we think about robotic manipulation, the default is usually to think about grippers—about robots using manipulators (like fingers or other end effectors) to interact with objects. For most humans, though, interacting with objects can be a lot more complicated, and we use whatever body parts are convenient to help us deal with objects that are large or heavy or awkward.

This somewhat constrained definition of robotic manipulation isn’t robotics’ fault, really. The word “manipulation” itself comes from the Latin for getting handsy with stuff, so there’s a millennium or two’s worth of hand-related inertia behind the term. The Los Altos, Calif.-based Toyota Research Institute (TRI) is taking a more expansive view with its new humanoid, Punyo, which uses its soft body to help it manipulate objects that would otherwise be pretty much impossible to manage with grippers alone.

“An anthropomorphic embodiment allows us to explore the complexities of social interactions like physical assistance, non-verbal communication, intent, predictability, and trust, to name just a few.” —Alex Alspach, Toyota Research Institute (TRI)

Punyo started off as just a squishy gripper at TRI, but the idea was always to scale up to a big squishy humanoid, hence this concept art of a squishified T-HR3:

This concept image shows what Toyota’s T-HR3 humanoid might look like when bubble-ized. TRI

“We use the term ‘bubble-ized,’” says Alex Alspach, Tech Lead for Punyo at TRI. Alspach tells us that the concept art above doesn’t necessarily reflect what the Punyo humanoid will eventually look like, but “it gave us some physical constraints and a design language. It also reinforced the idea that we are after general hardware and software solutions that can augment and enable both future and existing robots to take full advantage of their whole bodies for manipulation.”

This version of Punyo isn’t quite at “whole” body manipulation, but it can get a lot done using its arms and chest, which are covered with air bladders that provide both sensing and compliance:

Many of those motions look very human-like, because this is how humans manipulate things. Not to throw too much shade at all those humanoid warehouse robots, but as is pointed out in the video above, lifting things with just our hands outstretched in front of us is not how humans do it, because using other parts of our bodies to provide extra support makes lifting easier. This is not a trivial problem for robots, though, because interactions between rigid point contacts (which is how most robotic manipulators handle the world) are fairly well understood. Once you throw big squishy surfaces into the mix, along with big squishy objects, it’s just not something that most robots are ready for.

“A soft robot does not interact with the world at a single point.” —Russ Tedrake, TRI

“Current robot manipulation evolved from big, strong industrial robots moving car parts and big tools with their end effectors,” Alspach says. “I think it’s wise to take inspiration from the human form—we are strong enough to perform most everyday tasks with our hands, but when a big, heavy object comes around, we need to get creative with how we wrap our arms around it and position our body to lift it.”

Robots are renowned for lifting big and heavy objects, primarily by manipulating them with robot-y form factors in robot-y ways. So what’s so great about the human form factor, anyway? This question goes way beyond Punyo, of course, but we wanted to get the Punyo team’s take on humanoids, and we tossed a couple more questions at them just for fun.

IEEE Spectrum: So why humanoids?

Alspach: The humanoid robot checks a few important boxes. First of all, the environments we intend to work in were built for humans, so the humanoid form helps a robot make use of the spaces and tools around it. Independently, multiple teams at TRI have converged on bi-manual systems for tasks like grocery shopping and food preparation. A chest between these arms is a simple addition that gives us useful contact surfaces for manipulating big objects, too. Furthermore, our Human-Robot Interaction (HRI) team has done, and continues to do, extensive research with older adults, the people we look forward to helping the most. An anthropomorphic embodiment allows us to explore the complexities of social interactions like physical assistance, non-verbal communication, intent, predictability, and trust, to name just a few.

“We focus not on highly precise tasks but on gross, whole-body manipulation, where robust strategies help stabilize and control objects, and a bit of sloppiness can be an asset.” —Alex Alspach, TRI

Does having a bubble-ized robot make anything more difficult for you?

Russ Tedrake, VP of Robotics Research: If you think of your robot as interacting with the world at a point—the standard view from, e.g., impedance control—then putting a soft, passive spring in series between your robot and the world does limit performance. It reduces your control bandwidth. But that view misses the more important point. A soft robot does not interact with the world at a single point. Soft materials fundamentally change the dynamics of contact by deforming around the object—generating patch contacts that allow contact forces and moments not achievable by a rigid interaction.
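
To make Tedrake’s point concrete, here’s a back-of-the-envelope sketch (our own illustration, not TRI’s model) of why a patch contact matters: a frictional patch can resist twisting about the contact normal, while an idealized point contact cannot. For a circular patch of radius r with uniform pressure and normal force N, the maximum friction moment works out to (2/3)μNr.

```python
# Illustration only (not TRI's model): a circular friction patch of radius r,
# pressed with normal force N at friction coefficient mu, can resist a
# maximum twisting moment of (2/3) * mu * N * r about the contact normal.
# A point contact is the r -> 0 limit, which resists no twist at all.

def max_friction_moment(mu: float, normal_force: float, patch_radius: float) -> float:
    """Max twisting moment (N*m) a uniform-pressure circular patch can resist."""
    return (2.0 / 3.0) * mu * normal_force * patch_radius

mu, N = 0.8, 50.0  # assumed rubbery friction coefficient and 50 N of squeeze
for r in (0.0, 0.01, 0.05, 0.10):  # patch radius in meters
    print(f"r = {r:4.2f} m -> max moment = {max_friction_moment(mu, N, r):.2f} N*m")
# r = 0.00 m resists 0.00 N*m (point contact); r = 0.10 m resists ~2.67 N*m.
# That moment is what keeps a big box from pivoting out of a two-arm hug.
```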

Alspach: Punyo’s softness is extreme compared to other manipulation platforms that may, say, just have rubber pads on their arms or fingers. This compliance means that when we grab an object, it may not settle exactly where we planned for it to, or, for example, if we bump that object up against the edge of a table, it may move within our grasp. For these reasons, tactile sensing is an important part of our solution as we dig into how to measure and control the state of the objects we manipulate. We focus not on highly precise tasks but on gross, whole-body manipulation, where robust strategies help stabilize and control objects, and a bit of sloppiness can be an asset.

Compliance can be accomplished in different ways, including just in software. What’s the importance of having a robot that’s physically squishy rather than just one that acts squishily?

Andrew Beaulieu, Punyo Tech Lead: We do not believe that passive and active compliance should be considered mutually exclusive, and there are several advantages to having a physically squishy robot, especially when we consider having a robot operate near people and in their spaces. Having a robot that can safely make contact with the world opens up avenues of interaction and exploration. Using compliant materials on the robot also allows it to conform to complicated shapes passively in a way that would otherwise involve more complicated articulated or actuated mechanisms. Conforming to the objects allows us to increase the contact patch with the object and distribute the forces, usually creating a more robust grasp. These compliant surfaces allow us to research planning and control methods that might be less precise, rely less on accurate object localization, or use hardware with less precise control or sensing.

What’s it like to be hugged by Punyo?

Kate Tsui, Punyo HRI Tech Lead: Although Punyo isn’t a social robot, a surprising amount of emotion comes through its hug, and it feels quite comforting. A hug from Punyo feels like a long, sustained, snug squeeze from a close friend you haven’t seen for a long time and don’t want to let go.


A series of concept images shows situations in which whole-body manipulation might be useful in the home. TRI

(Interview transcript ends.)

Softness seems like it could be a necessary condition for bipedal humanoids working in close proximity to humans, especially in commercial or home environments where interactions are less structured and predictable. “I think more robots using their whole body to manipulate is coming soon, especially with the recent explosion of humanoids outside of academic labs,” Alspach says. “Capable, general-purpose robotic manipulation is a competitive field, and using the whole body unlocks the ability to efficiently manipulate large, heavy, and unwieldy objects.”



This paper presents a novel webcam-based approach for gaze estimation on computer screens. Utilizing appearance-based gaze estimation models, the system provides a method for mapping the gaze vector from the user’s perspective onto the computer screen. Notably, it determines the user’s 3D position in front of the screen using only a 2D webcam, without the need for additional markers or equipment. The study presents a comprehensive comparative analysis, assessing the performance of the proposed method against established eye-tracking solutions. This includes a direct comparison with the purpose-built Tobii Eye Tracker 5, a high-end hardware solution, and the webcam-based GazeRecorder software. In experiments replicating head movements, especially those imitating yaw rotations, the study brings to light the inherent difficulties associated with tracking such motions using 2D webcams. This research introduces a solution by integrating Structure from Motion (SfM) into the Convolutional Neural Network (CNN) model. The study’s accomplishments include showcasing the potential for accurate screen gaze tracking with a simple webcam, presenting a novel approach for physical distance computation, and proposing compensation for head movements, laying the groundwork for advancements in real-world gaze estimation scenarios.
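
The abstract doesn’t include code, but the geometric core of such a method—casting the estimated gaze ray from the user’s 3D eye position and intersecting it with the screen plane—can be sketched in a few lines. This is a minimal illustration under our own assumptions (the frame convention and all names are invented, not the paper’s actual code):

```python
import numpy as np

def gaze_to_screen(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a gaze ray with the screen plane; inputs in meters,
    camera coordinates. Returns the 3D hit point or None."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the screen: no intersection
    t = np.dot(plane_normal, plane_point - eye_pos) / denom
    return eye_pos + t * gaze_dir if t > 0 else None  # t < 0: behind the user

# Eye estimated 60 cm in front of the screen, gazing slightly down and left;
# the screen is modeled as the z = 0 plane of the camera frame (assumption).
eye = np.array([0.05, 0.10, 0.60])
gaze = np.array([-0.10, -0.15, -1.00])
hit = gaze_to_screen(eye, gaze, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(hit)  # ~[-0.01, 0.01, 0.0]; map to pixels using screen size and DPI
```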



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO, CA
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Columbia engineers build Emo, a silicone-clad robotic face that makes eye contact and uses two AI models to anticipate and replicate a person’s smile before the person actually smiles—a major advance in robots predicting human facial expressions accurately, improving interactions, and building trust between humans and robots.

[ Columbia ]

Researchers at Stanford University have invented a way to augment electric motors to make them much more efficient at performing dynamic movements through a new type of actuator, a device that uses energy to make things move. Their actuator, published 20 March in Science Robotics, uses springs and clutches to accomplish a variety of tasks with a fraction of the energy usage of a typical electric motor.

[ Stanford ]

I’m sorry, but the world does not need more drummers.

[ Fourier Intelligence ]

Always good to see NASA’s Valkyrie doing research.

[ NASA ]

In challenging terrains, constructing structures such as antennas and cable-car masts often requires the use of helicopters to transport loads via ropes. Challenging this paradigm, we present Geranos: a specialized multirotor Unmanned Aerial Vehicle (UAV) designed to enhance aerial transportation and assembly. Our experimental demonstration mimicking antenna/cable-car mast installations showcases Geranos’ ability to stack poles (3 kilograms, 2 meters long) with remarkable sub-5-centimeter placement accuracy, without the need for manual human intervention.

[ Paper ]

Flyability’s Elios 2 in November 2020 helped researchers inspect Reactor 5 at the Chernobyl nuclear disaster site to determine whether any uranium was present in the area. Prior to this, Reactor 5 had not been investigated since the disaster in 1986.

[ Flyability ]

Various musculoskeletal humanoids have been developed so far. While these humanoids have the advantage of their flexible and redundant bodies that mimic the human body, they are still far from being applied to real-world tasks. One of the reasons for this is the difficulty of bipedal walking in a flexible body. Thus, we developed a musculoskeletal wheeled robot, Musashi-W, by combining a wheeled base and musculoskeletal upper limbs for real-world applications.

[ Paper ]

Thanks, Kento!

A recent trend in industrial robotics is to have robotic manipulators working side-by-side with human operators. A challenging aspect of this coexistence is that the robot is required to reliably solve complex path-planning problems in a dynamically changing environment. To ensure the safety of the human operator while simultaneously achieving efficient task realization, this paper introduces... a scheme [that] can steer the robot arm to the desired end-effector pose in the presence of actuator saturation, limited joint ranges, speed limits, a cluttered static obstacle environment, and moving human collaborators.

[ Paper ]

Thanks, Kelly!

Our mobile manipulator Digit worked continuously for 26 hours split over the 3.5 days of Modex 2024, in Atlanta. Everything was tracked and coordinated by our newest product, Agility Arc, a cloud automation platform.

[ Agility ]

We’re building robots that can keep people out of harm’s way: Spot enables operators to remotely investigate and de-escalate hazardous situations. Robots have been used in government and public safety applications for decades, but Spot’s unmatched mobility and intuitive interface are changing incident response for departments in the field today.

[ Boston Dynamics ]

This paper presents a Bistable Aerial Transformer (BAT) robot, a novel morphing hybrid aerial vehicle (HAV) that switches between quadrotor and fixed-wing modes via rapid acceleration and without any additional actuation beyond those required for normal flight.

[ Paper ]

Disney’s Baymax frequently takes the spotlight in many research presentations dedicated to soft and secure physical human-robot interaction (pHRI). KIMLAB’s recent paper in TRO showcases a step towards realizing the Baymax concept by enveloping the skeletons of PAPRAS (Plug And Play Robotic Arm System) with soft skins and utilizing them for sensory functions.

[ Paper ]

Catch me if you can!

[ CVUT ]

Deep Reinforcement Learning (RL) has demonstrated impressive results in solving complex robotic tasks such as quadruped locomotion. Yet, current solvers fail to produce efficient policies respecting hard constraints. In this work, we advocate for integrating constraints into robot learning and present Constraints as Terminations (CaT), a novel constrained RL algorithm.

[ CaT ]

Why hasn’t the dream of having a robot at home to do your chores become a reality yet? With three decades of research expertise in the field, roboticist Ken Goldberg sheds light on the clumsy truth about robots—and what it will take to build more dexterous machines to work in a warehouse or help out at home.

[ TED ]

Designed as a technology demonstration that would perform up to five experimental test flights over a span of 30 days, the Mars helicopter surpassed expectations—repeatedly—only recently completing its mission after having logged an incredible 72 flights over nearly three years. Join us for a live talk to learn how Ingenuity’s team used resourcefulness and creativity to transform the rotorcraft from a successful tech demo into a helpful scout for the Perseverance rover, ultimately proving the value of aerial exploration for future interplanetary missions.

[ JPL ]

Please join us for a lively panel discussion featuring GRASP Faculty members Dr. Pratik Chaudhari, Dr. Dinesh Jayaraman, and Dr. Michael Posa. This panel will be moderated by Dr. Kostas Daniilidis around the current hot topic of AI Embodied in Robotics.

[ Penn Engineering ]



At NVIDIA GTC last week, Boston Dynamics CTO Aaron Saunders gave a talk about deploying AI in real-world robots—namely, how Spot is leveraging reinforcement learning to get better at locomotion (we spoke with Saunders last year about robots falling over). And Spot has gotten a lot better—a Spot robot takes a tumble on average once every 50 kilometers, even as the Spot fleet collectively walks far enough to circle the Earth every three months.

That fleet consists of a lot of commercial deployments, which is impressive for any mobile robot, but part of the reason is that the current version of Spot is really not intended for robotics research, even though over 100 universities are home to at least one Spot. Boston Dynamics has not provided developer access to Spot’s joints, meaning that anyone who has wanted to explore quadrupedal mobility has had to find some other platform that’s a bit more open and allows for some experimentation.

Boston Dynamics is now announcing a new variant of Spot that includes a low-level application programming interface (API) that gives joint-level control of the robot. This will give (nearly) full control over how Spot moves its legs, which is a huge opportunity for the robotics community, since we’ll now be able to find out exactly what Spot is capable of. For example, we’ve already heard from a credible source that Spot is capable of running much, much faster than Boston Dynamics has publicly shown, and it’s safe to assume that a speedier Spot is just the start.

An example of a new Spot capability when a custom locomotion controller can be used on the robot. Boston Dynamics

When you buy a Spot robot from Boston Dynamics, it arrives already knowing how to walk. It’s very, very good at walking. Boston Dynamics is so confident in Spot’s walking ability that you’re only allowed high-level control of the robot: You tell it where to go, it decides how to get there. If you want to do robotics research using Spot as a mobility platform, that’s totally fine, but if you want to do research on quadrupedal locomotion, it hasn’t been possible with Spot. But that’s changing.

The Spot RL Researcher Kit is a collaboration between Boston Dynamics, Nvidia, and the AI Institute. It includes a joint-level control API, an Nvidia Jetson AGX Orin payload, and a simulation environment for Spot based on Nvidia Isaac Lab. The kit will be officially released later this year, but Boston Dynamics is starting a slow rollout through an early adopter beta program.
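
Boston Dynamics hasn’t published the API itself yet, but joint-level control on quadrupeds generally means streaming per-joint targets (plus PD gains) at hundreds of hertz while an RL policy maps sensor readings to those targets. The sketch below is purely hypothetical—the client, state, and command names are invented for illustration and are not the Spot SDK:

```python
import time
import numpy as np

NUM_JOINTS = 12   # three actuated joints per leg on a typical quadruped
CONTROL_HZ = 500  # joint-level loops typically run at hundreds of hertz

def run_policy_loop(client, policy, duration_s=5.0):
    """Stream joint targets from an RL policy. `client` and its methods
    (read_state, send_joint_command) are hypothetical, not the Spot SDK."""
    dt = 1.0 / CONTROL_HZ
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        state = client.read_state()  # joint positions/velocities, IMU, etc.
        obs = np.concatenate([state.joint_pos, state.joint_vel, state.imu])
        target = policy(obs)         # NUM_JOINTS target joint angles
        client.send_joint_command(position=target, kp=60.0, kd=1.5)
        time.sleep(dt)               # a real loop would use a hard real-time tick
```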

From a certain perspective, Boston Dynamics did this whole thing with Spot backwards by first creating a commercial product and only then making it into a research platform. “At the beginning, we felt like it would be great to include that research capability, but that it wasn’t going to drive the adoption of this technology,” Saunders told us after his GTC session. Instead, Boston Dynamics first focused on getting lots of Spots out into the world in a useful way, and only now, when the company feels like they’ve gotten there, is the time right to unleash a fully-featured research version of Spot. “It was really just getting comfortable with our current product that enabled us to go back and say, ‘how can we now provide people with the kind of access that they’re itching for?’”

Getting to this point has taken a huge amount of work for Boston Dynamics. Predictably, Spot started out as a novelty for most early adopters, becoming a project for different flavors of innovation groups within businesses rather than an industrial asset. “I think there’s been a change there,” Saunders says. “We’re working with operational customers a lot more, and the composition of our sales is shifting away from being dominated by early adopters, and we’re starting to see repeat sales and interest in larger fleets of robots.”

Deploying and supporting a large fleet of Spots is one of the things that allowed Boston Dynamics to feel comfortable offering a research version. Researchers are not particularly friendly to their robots, because the goal of research is often to push the envelope of what’s possible. And part of that process includes getting very well acquainted with what turns out to be not possible, resulting in robots that end up on the floor, sometimes in pieces. The research version of Spot will include a mandatory Spot Care Service Plan, which exists to serve commercial customers but will almost certainly provide more value to researchers who want to see what kinds of crazy things they can get Spot to do.

Exactly how crazy those crazy things will be remains to be seen. Boston Dynamics is starting out with a beta program for the research Spots partially because they’re not quite sure yet how many safeguards to put in place within the API. “We need to see where the problems are,” Saunders says. “We still have a little work to do to really hone in on how our customers are going to use it.” Deciding how much Spot should be able to put itself at risk in the name of research may be a difficult question to answer, but I’m pretty sure that the beta program participants are going to do their best to find out how much tolerance Boston Dynamics has for Spot shenanigans. I just hope that whatever happens, they share as much video of it as possible.

The Spot Early Adopter Program for the new RL Researcher Kit is open for applications here.



In recent years, virtual idols have garnered considerable attention because they can perform activities similar to real idols. However, as they are fictitious idols with no physical presence, they cannot perform physical interactions such as handshakes. One way to address this problem is to combine a robotic hand with a display showing the virtual idol. Even though a physical handshake then becomes possible, it remains unclear what form of handshake can effectively induce the desired behavior. In this study, we adopted a robotic hand as an interface and aimed to imitate the behavior of real idols. To test the effects of this behavior, we conducted step-wise experiments. The series of experiments revealed that a handshake by the robotic hand increased the feeling of intimacy toward the virtual idol and made it more enjoyable to respond to a request from the virtual idol. In addition, viewing the virtual idol during the handshake increased the feeling of intimacy with the virtual idol. Moreover, the handshake style peculiar to idols, in which the hand keeps holding the user’s hand after the conversation, further increased the feeling of intimacy toward the virtual idol.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

See NVIDIA’s journey from pioneering advanced autonomous vehicle hardware and simulation tools to accelerated perception and manipulation for autonomous mobile robots and industrial arms, culminating in the next wave of cutting-edge AI for humanoid robots.

[ NVIDIA ]

In release 4.0, we advanced Spot’s locomotion abilities thanks to the power of reinforcement learning. Paul Domanico, Robotics Engineer at Boston Dynamics, talks through how Spot’s hybrid approach of combining reinforcement learning with model predictive control creates an even more stable robot in the most antagonistic environments.

[ Boston Dynamics ]

We’re excited to share our latest progress on teaching EVEs general-purpose skills. Everything in the video is all autonomous, all 1X speed, all controlled with a single set of neural network weights.

[ 1X ]

What I find interesting about the Unitree H1 doing a standing flip is where it decides to put its legs.

[ Unitree ]

At the MODEX Exposition in March of 2024, Pickle Robot demonstrated picking freight from a random pile similar to what you see in a messy truck trailer after it has bounced across many miles of highway. The piles of boxes were never the same, and the demonstration was run live in front of crowds of onlookers 25 times over 4 days. No other robotic trailer/container unloading system has yet demonstrated this ability to pick from unstructured piles.

[ Pickle ]

RunRu is a car-like robot, or a robot-like car, with autonomy, sociability, and operability. It is a new type of personal vehicle that aims to create a “Jinba-Ittai” (rider-and-horse unity) relationship with its passengers—one that is not only assertive, but also sometimes whines.

[ ICD-LAB ]

Verdie went to GTC this year and won the hearts of people but maybe not the other robots.

[ Electric Sheep ]

The “DEEPRobotics AI+” merges AI capabilities with robotic software systems to continuously boost embodied intelligence. The showcased achievement is a result of training a new AI and software system.

[ DEEP Robotics ]

If you want to collect data for robot grasping, using Stretch and a pair of tongs is about as affordable as it gets.

[ Hello Robot ]

The real reason why Digit’s legs look backwards is so that it doesn’t bang its shins taking GPUs out of the oven.

Meanwhile, some of us can bake our GPUs without even needing an oven.

[ Agility ]

P1 is LimX Dynamics’ innovative point-foot biped robot, serving as an important platform for the systematic development and modular testing of reinforcement learning. It is utilized to advance the research and iteration of basic biped locomotion abilities. The success of P1 in conquering forest terrain is a testament to LimX Dynamics’ systematic R&D in reinforcement learning.

[ LimX ]

And now, this.

[ Suzumori Endo Lab ]

Cooking in kitchens is fun. BUT doing it collaboratively with two robots is even more satisfying! We introduce MOSAIC, a modular framework that coordinates multiple robots to closely collaborate and cook with humans via natural language interaction and a repository of skills.

[ Cornell ]

neoDavid is a Robust Humanoid with Dexterous Manipulation Skills, developed at DLR. The main focus in the development of neoDavid is to get as close to human capabilities as possible—especially in terms of dynamics, dexterity and robustness.

[ DLR ]

Welcome to our customer spotlight video series where we showcase some of the remarkable robots that our customers have been working on. In this episode we showcase three Clearpath Robotics UGVs that our customers are using to create robotic assistants for three different applications.

[ Clearpath ]

This video presents KIMLAB’s new three-fingered robotic hand, featuring soft tactile sensors for enhanced grasping capabilities. Leveraging cost-effective 3D printing materials, it ensures robustness and operational efficiency.

[ KIMLAB ]

Various perception-aware planning approaches have attempted to enhance state estimation accuracy during maneuvers, while feature matchability among frames, a crucial factor influencing estimation accuracy, has often been overlooked. In this paper, we present APACE, an Agile and Perception-Aware trajeCtory gEneration framework for quadrotors’ aggressive flight that takes feature matchability into account during trajectory planning.

[ Paper ] via [ HKUST ]

In this video, we see Samuel Kunz, the pilot of the RSL Assistance Robot Race team from ETH Zurich, as he participates in the CYBATHLON Challenges 2024. Samuel completed all four designated tasks—retrieving a parcel from a mailbox, using a toothbrush, hanging a scarf on a clothesline, and emptying a dishwasher—with the help of an assistance robot. He achieved a perfect score of 40 out of 40 points and secured first place in the race, completing the tasks in 6.34 minutes.

[ CYBATHLON ]

Florian Ledoux is a wildlife photographer with a deep love for the Arctic and its wildlife. Using the Mavic 3 Pro, he steps onto the ice ready to capture the raw beauty and the stories of this chilly, remote place.

[ DJI ]



Automated disassembly is increasingly in focus for Recycling, Re-use, and Remanufacturing (Re-X) activities. Trends in digitalization, in particular digital twin (DT) technologies and the digital product passport, as well as recently proposed European legislation such as the Net Zero and Critical Materials Acts, will accelerate digitalization of product documentation and factory processes. In this contribution we look beyond these activities by discussing digital information for stakeholders at the Re-X segment of the value chain. Furthermore, we present an approach to automated product disassembly based on different levels of available product information. The challenges for automated disassembly and the subsequent requirements on modeling of disassembly processes and product states for electronic waste are examined. The authors use a top-down methodology (e.g., review of existing standards and process definitions) to define an initial data model for disassembly processes. An additional bottom-up approach, whereby five exemplary electronics products were manually disassembled, was employed to analyze the efficacy of the initial data model and to offer improvements. This paper reports on our suggested informal data models for automatic electronics disassembly and the associated robotic skills.
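
As a rough illustration of what a disassembly data model along these lines might capture—product information level, product state, and skill-based disassembly steps—here is a minimal sketch. All class and field names are our own assumptions, not the schema proposed in the paper:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ProductInfoLevel(Enum):
    NONE = auto()     # unknown device: vision-guided disassembly only
    PARTIAL = auto()  # model identified, but no fastener map available
    FULL = auto()     # digital product passport / digital twin data on hand

@dataclass
class ProductState:
    product_id: str
    info_level: ProductInfoLevel
    components_remaining: list[str] = field(default_factory=list)

@dataclass
class DisassemblyStep:
    skill: str                      # robotic skill, e.g. "unscrew", "pry"
    target: str                     # component or fastener acted on
    preconditions: list[str] = field(default_factory=list)
    frees: list[str] = field(default_factory=list)  # components released

# A two-step battery-removal plan at PARTIAL information level:
plan = [
    DisassemblyStep("unscrew", "back_cover_screws", frees=["back_cover"]),
    DisassemblyStep("lift", "battery",
                    preconditions=["back_cover"], frees=["battery"]),
]
```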

The targeted use of social robots for the family demands a better understanding of multiple stakeholders’ privacy concerns, including those of parents and children. Through a co-learning workshop that introduced families to the functions and hypothetical use of social robots in the home, we present preliminary evidence from six families showing that parents and children have different comfort levels with robots collecting and sharing information across different use contexts. Conversations and booklet answers reveal that parents adopted their child’s decision in scenarios where they expect children to have more agency, such as homework completion or cleaning up toys, and when children proposed reasoning their parents found acceptable. Families expressed relief when they shared the same reasoning in coming to conclusive decisions, signifying agreement on boundary management between the robot and the family. In cases where parents and children did not agree, they rejected a binary, either-or decision and opted for a third type of response, reflecting skepticism, uncertainty, and/or compromise. Our work highlights the benefits of involving parents and children in child- and family-centered research, including parents’ ability to provide cognitive scaffolding and personalize hypothetical scenarios for their children.

Introduction: The teaching process plays a crucial role in the training of professionals. Traditional classroom-based teaching methods, while foundational, often struggle to effectively motivate students. The integration of interactive learning experiences, such as visuo-haptic simulators, presents an opportunity to enhance both student engagement and comprehension.

Methods: In this study, three simulators were developed to explore the impact of visuo-haptic simulations on engineering students’ engagement and their perceptions of learning basic physics concepts. The study used an adapted end-user computing satisfaction questionnaire to assess students’ experiences and perceptions of the simulators’ usability and their utility in learning.

Results: Feedback from participants suggests a positive reception towards the use of visuo-haptic simulators, highlighting their usefulness in improving the understanding of complex physics principles.

Discussion: Results suggest that incorporating visuo-haptic simulations into educational contexts may offer significant benefits, particularly in STEM courses, where traditional methods may be limited. The positive responses from participants underscore the potential of computer simulations to innovate pedagogical strategies. Future research will focus on assessing the effectiveness of these simulators in enhancing students’ learning and understanding of these concepts in higher-education physics courses.



Applying electricity for a few seconds to a soft material, such as a slice of raw tomato or chicken, can strongly bond it to a hard object, such as a graphite slab, without any tape or glue, a new study finds. This unexpected effect is also reversible—switching the direction of the electric current often easily separates the materials, scientists at the University of Maryland say. Potential applications for such “electroadhesion,” which can even work underwater, may include improved biomedical implants and biologically inspired robots.

“It is surprising that this effect was not discovered earlier,” says Srinivasa Raghavan, a professor of chemical and biomolecular engineering at the University of Maryland. “This is a discovery that could have been made pretty much since we’ve had batteries.”

In nature, soft materials such as living tissues are often bonded to hard objects such as bones. Previous research explored chemical ways to accomplish this feat, such as with glues that mimic how mussels stick to rocks and boats. However, these bonds are usually irreversible.

Previously, Raghavan and his colleagues discovered that electricity could make gels stick to biological tissue, a discovery that might one day lead to gel patches that can help repair wounds. In the new study, instead of bonding two soft materials together, they explored whether electricity could make a soft material stick to a hard object.

The scientists began with a pair of graphite electrodes (consisting of an anode and a cathode) and an acrylamide gel. They applied five volts across the gel for three minutes. Surprisingly, they found the gel strongly bonded onto the graphite anode. Attempts to wrench the gel and electrode apart would typically break the gel, leaving pieces of it on the electrode. The bond could apparently last indefinitely after the voltage was removed, with the researchers keeping samples of gel and electrode stuck together for months.

However, when the researchers switched the polarity of the current, the acrylamide gel detached from the anode and instead adhered to the other electrode.

Raghavan and his colleagues experimented with this newfound electroadhesion effect in a number of different ways. They tried a variety of soft materials, such as tomato, apple, beef, chicken, pork, and gelatin, as well as different electrode metals, such as copper, lead, tin, nickel, iron, zinc, and titanium. They also varied the strength of the voltage and the amount of time it was applied.
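
That design reads as a straightforward parameter sweep over four factors. As a rough illustration only (the factor levels below are assumed, not the study’s actual protocol beyond the reported 5-volt, 3-minute baseline), the experimental grid could be enumerated like this:

```python
from itertools import product

# Factors reported in the article; the specific levels here are illustrative.
soft_materials = ["tomato", "apple", "beef", "chicken", "pork", "gelatin"]
electrodes = ["graphite", "copper", "lead", "tin",
              "nickel", "iron", "zinc", "titanium"]
voltages_V = [5, 10, 15]       # 5 V for 3 minutes was the baseline condition
durations_s = [30, 60, 180]    # other levels are assumed for illustration

trials = [
    {"soft": s, "electrode": e, "voltage_V": v, "duration_s": t}
    for s, e, v, t in product(soft_materials, electrodes, voltages_V, durations_s)
]
print(len(trials), "combinations to test")  # 6 * 8 * 3 * 3 = 432
```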

The researchers found that the amount of salt in the soft material played a strong role in the electroadhesion effect. Salt makes the soft material conductive, and at high salt concentrations, gels could adhere to electrodes within seconds.

The scientists also discovered that metals that are better at giving up their electrons, such as copper, lead and tin, are better at electroadhesion. Conversely, metals that hold onto their electrons strongly, such as nickel, iron, zinc and titanium, fared poorly.

These findings suggest that electroadhesion arises from chemical bonds between the electrode and soft material after they exchange electrons. Depending on the nature of the hard and soft materials, adhesion happened at the anode, cathode, both electrodes, or neither. Boosting the strength of the voltage and the amount of time it was applied typically increased adhesion strength.

“It’s surprising how simple this effect is, and how widespread it might be,” Raghavan says.

Potential applications for electroadhesion may include improving biomedical implants—the ability to bond tissue to steel or titanium could help reinforce implants, the researchers say. Electroadhesion may also help create biologically inspired robots with stiff bone-like skeletons and soft muscle-like elements, they add. They also suggest electroadhesion could lead to new kinds of batteries where soft electrolytes are bonded to hard electrodes, although it’s not clear if such adhesion would make much of a difference to a battery’s performance, Raghavan says.

The researchers also discovered that electroadhesion can occur underwater, which they suggest could open up an even wider range of applications. Typical adhesives do not work underwater: many cannot spread onto solid surfaces submerged in liquid, and even those that can usually form only weak bonds because the liquid interferes with adhesion.

“It’s hard for me to pinpoint one real application for this discovery,” Raghavan says. “It reminds me of the researchers who made the discoveries behind Velcro or Post-it notes—the applications were not obvious to them when the discoveries were made, but the applications did arise over time.”

The scientists detailed their findings online 13 March in the journal ACS Central Science.



Nvidia’s ongoing GTC developer conference in San Jose is, unsurprisingly, almost entirely about AI this year. But in between the AI developments, Nvidia has also made a couple of significant robotics announcements.

First, there’s Project GR00T (with each letter and number pronounced individually so as not to invoke the wrath of Disney), a foundation model for humanoid robots. And second, Nvidia has committed to be a founding platinum member of the Open Source Robotics Alliance, a new initiative from the Open Source Robotics Foundation intended to make sure that the Robot Operating System (ROS), a collection of open-source software libraries and tools, has the support it needs to flourish.

GR00T

First, let’s talk about GR00T (short for “Generalist Robot 00 Technology”). The way that Nvidia presenters enunciated it letter-by-letter during their talks strongly suggests that in private they just say “Groot.” So the rest of us can also just say “Groot” as far as I’m concerned.

As a “general-purpose foundation model for humanoid robots,” GR00T is intended to provide a starting point for specific humanoid robots to do specific tasks. As you might expect from something being presented for the first time at an Nvidia keynote, it’s awfully vague at the moment, and we’ll have to get into it more later on. Here’s pretty much everything useful that Nvidia has told us so far:

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Jensen Huang, founder and CEO of NVIDIA. “The enabling technologies are coming together for leading roboticists around the world to take giant leaps towards artificial general robotics.”

Robots powered by GR00T... will be designed to understand natural language and emulate movements by observing human actions—quickly learning coordination, dexterity and other skills in order to navigate, adapt and interact with the real world.

This sounds good, but that “will be” is doing a lot of heavy lifting. Like, there’s a very significant “how” missing here. More specifically, we’ll need a better understanding of what’s underlying this foundation model—is there real robot data under there somewhere, or is it based on a massive amount of simulation? Are the humanoid robotics companies involved contributing data to improve GR00T, or training their own models based on it? It’s certainly notable that Nvidia is name-dropping most of the heavy hitters in commercial humanoids, including 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics, and XPENG Robotics. We’ll be able to check in with some of those folks directly this week to hopefully learn more.

On the hardware side, Nvidia is also announcing a new computing platform called Jetson Thor:

Jetson Thor was created as a new computing platform capable of performing complex tasks and interacting safely and naturally with people and machines. It has a modular architecture optimized for performance, power, and size. The SoC includes a next-generation GPU based on the NVIDIA Blackwell architecture, with a transformer engine delivering 800 teraflops of 8-bit floating-point AI performance to run multimodal generative AI models like GR00T. With an integrated functional safety processor, a high-performance CPU cluster, and 100-gigabit Ethernet bandwidth, it significantly simplifies design and integration efforts.

Speaking of Nvidia’s Blackwell architecture—today the company also unveiled its B200 Blackwell GPU. And to round out the announcements, the chip foundry TSMC and Synopsys, an electronic design automation company, each said they will be moving Nvidia’s inverse lithography tool, cuLitho, into production.

The Open Source Robotics Alliance

The other big announcement is actually from the Open Source Robotics Foundation, which is launching the Open Source Robotics Alliance (OSRA), a “new initiative to strengthen the governance of our open-source robotics software projects and ensure the health of the Robot Operating System (ROS) Suite community for many years to come.” Nvidia is an inaugural platinum member of the OSRA, but they’re not alone—other platinum members include Intrinsic and Qualcomm. Other significant members include Apex, Clearpath Robotics, Ekumen, eProsima, PickNik, Silicon Valley Robotics, and Zettascale.

“The [Open Source Robotics Foundation] had planned to restructure its operations by broadening community participation and expanding its impact in the larger ROS ecosystem,” explains Vanessa Yamzon Orsi, CEO of the Open Source Robotics Foundation. “The sale of [Open Source Robotics Corporation] was the first step towards that vision, and the launch of the OSRA is the next big step towards that change.”

We had time for a brief Q&A with Orsi to better understand how this will affect the ROS community going forward.

You structured the OSRA to have a mixed membership and meritocratic model like the Linux Foundation—what does that mean, exactly?

Vanessa Yamzon Orsi: We have modeled the OSRA to allow for paths to participation in its activities through both paid memberships (for organizations and their representatives) and the community members who support the projects through their contributions. The mixed model enables participation in the way most appropriate for each organization or individual: contributing funding as a paying member, contributing directly to project development, or both.

What are some benefits for the ROS ecosystem that we can look forward to through OSRA?

Orsi: We expect the OSRA to benefit the OSRF’s projects in three significant ways.

  • By providing a stable stream of funding to cover the maintenance and development of the ROS ecosystem.
  • By encouraging greater community involvement in development through open processes and open, meritocratic status achievement.
  • By bringing greater community involvement in governance and ensuring that all stakeholders have a voice in decision-making.

Why will this be a good thing for ROS users?

Orsi: The OSRA will ensure that ROS and the suite of open source projects under the stewardship of Open Robotics will continue to be supported and strengthened for years to come. By providing organized governance and oversight, clearer paths to community participation, and financial support, it will provide stability and structure to the projects while enabling continued development.


Along with the development of speech and language technologies, the market for speech-enabled human-robot interaction (HRI) has grown in recent years. However, people often find their conversational interactions with such robots far from satisfactory. One of the reasons is the habitability gap, where the usability of a speech-enabled agent drops as its flexibility increases. For social robots, such flexibility is reflected in the diverse choice of robots’ appearances, sounds, and behaviours, which shape a robot’s ‘affordance’. Whilst designers and users have enjoyed the freedom of constructing a social robot by integrating off-the-shelf technologies, such freedom comes at a potential cost: the users’ perceptions and satisfaction. Designing appropriate affordances is essential for the quality of HRI. It is hypothesised that a social robot with aligned affordances could create an appropriate perception of the robot and increase users’ satisfaction when speaking with it. Given that previous studies of affordance alignment mainly focus on a single interface’s characteristics and face-voice match, we aim to deepen our understanding of affordance alignment with a robot’s behaviours and use cases. In particular, we investigate how a robot’s affordances affect users’ perceptions in different types of use cases. For this purpose, we conducted an exploratory experiment with three affordance settings (adult-like, child-like, and robot-like) and three use cases (informative, emotional, and hybrid). Participants were invited to talk to social robots in person. A mixed-methods approach was employed for quantitative and qualitative analysis of 156 interaction samples. The results show that static affordance (face and voice) has a statistically significant effect on the perceived warmth of the first impression, while use cases affect people’s perceptions of competence and warmth both before and after interactions. The results also underline the importance of aligning static affordance with behavioural affordance, and general design principles for behavioural affordances are proposed. We anticipate that our empirical evidence will provide a clearer guideline for speech-enabled social robots’ affordance design and serve as a starting point for more sophisticated design guidelines, such as personalised affordance design for individual or group users in different contexts.

Introduction: Collaborative robots, designed to work alongside humans who manipulate their end-effectors, benefit greatly from the implementation of active constraints. This process comprises the definition of a boundary, followed by the enforcement of a control algorithm when the robot tooltip interacts with the generated boundary. Contact with the constraint boundary is communicated to the human operator through various potential forms of feedback. In fields like surgical robotics, where patient safety is paramount, implementing active constraints can prevent the robot from interacting with portions of the patient’s anatomy that shouldn’t be operated on. Despite improvements in orthopaedic surgical robots, however, there remains a gap between bulky systems with haptic feedback capabilities and miniaturised systems that only allow for boundary control, where interaction with the active constraint boundary interrupts robot functions. Active constraint generation generally relies on optical tracking systems and preoperative imaging techniques.

Methods: This paper presents a refined version of the Signature Robot, a three-degree-of-freedom, hands-on collaborative system for orthopaedic surgery. It also presents a method for generating and enforcing active constraints “on-the-fly” using our previously introduced monocular RGB camera-based network, SimPS-Net. The network was deployed in real time for boundary definition, and the resulting boundary was then used for constraint-enforcement testing. The robot was used to test two different active constraints: a safe region and a restricted region.

Results: The network success rate, defined as the ratio of correct to total object-localisation results, was 54.7% ± 5.2%. In the safe-region case, haptic feedback resisted tooltip manipulation beyond the active constraint boundary, with a mean distance from the boundary of 2.70 mm ± 0.37 mm and a mean exit duration of 0.76 s ± 0.11 s. For the restricted-region constraint, the operator was prevented from penetrating the boundary in 100% of attempts.

Discussion: This paper showcases the viability of the proposed robotic platform and presents promising results of a versatile constraint generation and enforcement pipeline.
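
Neither constraint type is spelled out here in implementation detail, but both fit a standard active-constraint pattern: compare the tooltip’s position against a boundary and apply a restoring response when the boundary is violated. A minimal sketch, assuming a spherical boundary and a simple spring-like haptic law (the gains, units, and geometry are illustrative, not the Signature Robot’s actual controller):

```python
import numpy as np

def constraint_force(tip, center, radius, mode, stiffness=500.0):
    """Toy active-constraint law for a spherical boundary.

    mode="safe": the tooltip should stay inside the sphere; a spring force
    pushes back once it crosses the boundary (resistive haptics).
    mode="restricted": the tooltip must stay outside; same idea, inverted.
    All values (N, m, stiffness) are illustrative; a real system would also
    add damping and tune gains for stability.
    """
    offset = np.asarray(tip, dtype=float) - np.asarray(center, dtype=float)
    dist = np.linalg.norm(offset)
    if dist == 0.0:
        return np.zeros(3)
    normal = offset / dist
    if mode == "safe" and dist > radius:        # exited the safe region
        return -stiffness * (dist - radius) * normal
    if mode == "restricted" and dist < radius:  # penetrated the no-go zone
        return stiffness * (radius - dist) * normal
    return np.zeros(3)

# Tooltip 3 mm beyond a 50-mm safe sphere -> restoring force pointing inward.
print(constraint_force([0.0, 0.0, 0.053], [0.0, 0.0, 0.0], 0.050, "safe"))
# approximately [ 0.   0.  -1.5] newtons
```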



About a year ago, Zipline introduced Platform 2, an approach to precision urban drone delivery that combines a large hovering drone with a smaller package-delivery “Droid.” Lowered on a tether from the belly of its parent Zip drone, the Droid contains thrusters and sensors (plus a 2.5- to 3.5-kilogram payload) to reliably navigate itself to a delivery area of just one meter in diameter. The Zip, meanwhile, safely remains hundreds of meters up. After depositing its payload, the Droid rises back up to the drone on its tether, and off they go.

At first glance, the sensor- and thruster-packed Droid seems complicated enough to border on impractical, especially when you consider the relative simplicity of other drone delivery solutions, which commonly just lower the package itself on a tether from a hovering drone. I’ve been writing about robots long enough to be suspicious of robotic solutions that appear to be overengineered, since overengineering is always a huge temptation in robotics. Like, is this really the best way of solving a problem, or is it just the coolest way?

We know the folks at Zipline pretty well, though, and they’ve certainly made creative engineering work for them, as we saw when we visited one of their “nests” in rural Rwanda. So as Zipline nears the official launch of Platform 2, we spoke with Zipline cofounder and CTO Keenan Wyrobek, Platform 2 lead Zoltan Laszlo, and industrial designer Gregoire Vandenbussche to understand exactly why they think this is the best way of solving precision urban drone delivery.

First, a quick refresher. Here’s what the delivery sequence with the vertical takeoff and landing (VTOL) Zip and the Droid looks like:

The system has a service radius of about 16 kilometers (10 miles), and it can make deliveries to outdoor spaces of “any meaningful size.” Visual sensors on the Droid find the delivery site and check for obstacles on the way down, while the thrusters compensate for wind and movement of the parent drone. Since the big VTOL Zip remains well out of the way, deliveries are fast, safe, and quiet. But it takes two robots to pull off the delivery rather than just one.

On the other end is the infrastructure required to load and charge these drones. Zipline’s Platform 1 drones require a dedicated base with relatively large launch and recovery systems. With Platform 2, the drone drops the Droid into a large chute attached to the side of a building so that the Droid can be reloaded, after which it pulls the Droid out again and flies off to make the delivery:

“We think it’s the best delivery experience. Not the best drone delivery experience, the best delivery experience,” Zipline’s Wyrobek tells us. That may be true, but the experience also has to be practical and sustainable for Zipline to be successful, so we asked the Zipline team to explain the company’s approach to precision urban delivery.

IEEE Spectrum: What problems is Platform 2 solving, and why is it necessary to solve those problems in this specific way?

Keenan Wyrobek: There are literally billions of last-mile deliveries happening every year in [the United States] alone, and our customers have been asking for years for something that can deliver to their homes. With our long-range platform, Platform 1, we can float a package down into your yard on a parachute, but that takes some space. And so one half of the big design challenge was how to get our deliveries precise enough, while the other half was to develop a system that will bolt on to existing facilities, which Platform 1 doesn’t do.

Zoltan Laszlo: Platform 1 can deliver within an area of about two parking spaces. As we started to actually look at the data in urban areas using publicly available lidar surveys, we found that two parking spaces serves a bit more than half the market. We want to be a universal delivery service.

But with a delivery area of 1 meter in diameter, which is what we’re actually hitting in our delivery demonstrations for Platform 2, that gets us into the high 90s for the percentage of people that we can deliver to.
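
For a rough sense of scale (the article doesn’t give stall dimensions, so a typical 2.5-by-5-meter parking space is assumed here), that jump in precision works out to roughly a 30-fold smaller delivery footprint:

```python
import math

parking_space_m2 = 2.5 * 5.0           # assumed typical stall size
two_spaces_m2 = 2 * parking_space_m2   # Platform 1's delivery footprint
droid_m2 = math.pi * 0.5 ** 2          # 1-meter-diameter circle for Platform 2

print(two_spaces_m2, round(droid_m2, 2), round(two_spaces_m2 / droid_m2))
# 25.0 0.79 32 -> roughly a 30x reduction in required area
```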

Wyrobek: When we say “urban,” what we’re talking about is three-story sprawl, which is common in many large cities around the world. And we wanted to make sure that our deliveries could be precise enough for places like that.

There are some existing solutions for precision aerial delivery that have been operating at scale with some success, typically by winching packages to the ground from a VTOL drone. Why develop your own technique rather than just going with something that has already been shown to work?

Laszlo: Winching down is the natural extension of being able to hover in place, and when we first started, we were like, “Okay, we’re just going to winch down. This will be great, super easy.”

So we went to our test site in Half Moon Bay [on the Northern California coast] and built a quick prototype of a winch system. But as soon as we lowered a box down on the winch, the wind started blowing it all over the place. And this was from the height of our lift, which is less than 10 meters up. We weren’t even able to stay inside two parking spaces, which told us that something was broken with our approach.

The aircraft can sense the wind, so we thought we’d be able to find the right angle for the delivery and things like that. But the wind where the aircraft is may be different from the wind nearer the ground. We realized that unless we’re delivering to an open field, a package that does not have active wind compensation is going to be very hard to control. We’re targeting high-90th percentile in terms of availability due to weather—even if it’s a pretty blustery day, we still want to be able to deliver.

Wyrobek: This was a wild insight when we really understood that unless it’s a perfect day, using a winch actually takes almost as much space as we use for Platform 1 floating a package down on a parachute.
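
Zipline hasn’t published the Droid’s control laws, but the core idea of active wind compensation can be sketched as a feedback loop: measure the horizontal offset from the delivery target and command thrust to cancel the drift. A toy proportional-derivative (PD) controller under those assumptions (the gains and thrust limit are invented for illustration):

```python
import numpy as np

def wind_compensation_step(pos, vel, target, kp=4.0, kd=2.5, max_thrust=15.0):
    """One step of a toy PD loop for horizontal station-keeping.

    pos, vel, target: 2D arrays (m, m/s) in the ground plane. The gains and
    thrust limit are made-up values; the real Droid's controller and
    actuator limits are not public.
    """
    error = target - pos
    cmd = kp * error - kd * vel       # pull toward the target, damp the drift
    norm = np.linalg.norm(cmd)
    if norm > max_thrust:             # respect thruster saturation
        cmd *= max_thrust / norm
    return cmd                        # commanded horizontal force (N)

# A gust has pushed the Droid 0.8 m off target while it drifts at 0.3 m/s.
print(wind_compensation_step(np.array([0.8, 0.0]),
                             np.array([0.3, 0.0]),
                             np.zeros(2)))
# roughly [-3.95  0.  ] N, pushing back toward the drop point
```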

Engineering test footage of Zipline’s Platform 2 docking system at their test site in Half Moon Bay in California.

How did you arrive at this particular delivery solution for Platform 2?

Laszlo: I don’t remember whose idea it was, but we were playing with a bunch of different options. Putting thrusters on the tether wasn’t even the craziest idea. We had our Platform 1 aircraft, which was reliable, so we started with looking at ways to just make that aircraft deliver more precisely. There was only so much more we could do with passive parachutes, but what does an active, steerable parachute look like? There are remote-controlled paragliding toys out there that we tested, with mixed results—the challenge is to minimize the smarts in your parachute, because there’s a chance you won’t get it back. So then we started some crazy brainstorming about how to reliably retrieve the parachute.

Wyrobek: One idea was that the parachute would come with a self-return envelope that you could stick in the mail. Another idea was that the parachute would be steered by a little drone, and when the package got dropped off, the drone would reel the parachute in and then fly back up into the Zip.

Laszlo: But when we realized that the package has to be able to steer itself, that meant the Zip doesn’t need to be active. The Zip doesn’t need to drive the package, it doesn’t even need to see the package, it just needs to be a point up in the sky that’s holding the package. That let us move from having the Zip 50 feet up, to having it 300 feet up, which is important because it’s a big, heavy drone that we don’t want in our customer’s space. And the final step was adding enough smarts to the thing coming down into your space to figure out where exactly to deliver to, and of course to handle the wind.

Once you knew what you needed to do, how did you get to the actual design of the Droid?

Gregoire Vandenbussche: Zipline showed me pretty early on that they were ready to try crazy ideas, and from my experience, that’s extremely rare. When the idea of having this controllable tether with a package attached to it came up, one of my first thoughts was that from a user standpoint, nothing like this exists. And the difficulty of designing something that doesn’t exist is that people will try to identify it according to what they know. So we had to find a way to drive that thinking towards something positive.

Early Droid concept sketches by designer Gregoire Vandenbussche featured legs that would fold up after delivery.Zipline

First we thought about putting words onto it, like “hello” or something, but the reality is that we’re an international company and we need to be able to work everywhere. But there’s one thing that’s common to everyone, and that’s emotions—people are able to recognize certain things as being approachable and adorable, so going in that direction felt like the right thing to do. However, being able to design a robot that gives you that kind of emotion but also flies was quite a challenge. We took inspiration from other things that move in 3D, like sea mammals—things that people will recognize even without thinking about it.

Vandenbussche’s sketches show how the design of the Droid was partially inspired by dolphins.Zipline

Now that you say it, I can definitely see the sea mammal inspiration in the drone.

Vandenbussche: There are two aspects of sea mammals that work really well for our purpose. One of them is simplicity of shape; sea mammals don’t have all that many details. Also, they tend to be optimized for performance. Ultimately, we need that, because we need to be able to fly. And we need to be able to convey to people that the drone is under control. So having something you can tell is moving forward or turning or moving away was very helpful.

Wyrobek: One other insight that we had is that Platform 2 needs to be small to fit into tight delivery spaces, and it needs to feel small when it comes into your personal space, but it also has to be big enough inside to be a useful delivery platform. We tried to leverage the chubby but cute look that baby seals have going on.

The design journey was pretty fun. Gregoire would spend two or three days coming up with a hundred different concept sketches. We’d do a bunch of brainstorming, and then Gregoire would come up with a whole bunch of new directions, and we’d keep exploring. To be clear, no one would describe our functional prototypes from back then as “cute.” But through all this iteration eventually we ended up in an awesome place.

And how do you find that place? When do you know that your robot is just cute enough?

One iteration of the Droid, Vandenbussche determined, looked too technical and intimidating.Zipline

Vandenbussche: It’s finding the balance around what’s realistic and functional. I like to think of industrial design as taking all of the constraints and kind of playing Tetris with them until you get a result that ideally satisfies everybody. I remember at one point looking at where we were, and feeling like we were focusing too much on performance and missing that emotional level. So, we went back a little bit to say, where can we bring this back from looking like a highly technical machine to something that can give you a feeling of approachability?

Laszlo: We spent a fair bit of time on the controls and behaviors of the Droid to make sure that it moves in a very approachable and predictable way, so that you know where it’s going ahead of time and it doesn’t behave in unexpected ways. That’s pretty important for how people perceive it.

We did a lot of work on how the Droid would descend and approach the delivery site. One concept had the Droid start to lower down well before the Zip was hovering directly overhead. We had simulations and renderings, and it looked great. We could do the whole delivery in barely over 20 seconds. But even if the package is far away from you, it still looks scary, because [the Zip is] moving faster than you would expect and you can’t tell exactly where it’s going to deliver. So we deleted all that code, and now it just comes straight down, and people don’t back away from the Droid anymore. They’re just like, “Oh, okay, cool.”

How did you design the thrusters to enable these pinpoint deliveries?

Early tests of the Droid centered around a two-fan version.Zipline

Laszlo: With the thrusters, we knew we wanted to maximize the size of at least one of the fans, because we were almost always going to have to deal with wind. We’re trying to be as quiet as we can, so the key there is to maximize the area of the propeller. Our leading early design was just a box with two fans on it:

Two fans with unobstructed flow meant that it moved great, but the challenge of fitting it inside another aircraft was going to be painful. And it looked big, even though it wasn’t actually that big.

Vandenbussche: It was also pretty intimidating when you had those two fans facing you and the Droid coming toward you.

A single steerable fan [left] that acted like a rudder was simpler in some ways, but as the fan got larger, the gyroscopic effects became hard to manage. Instead of one steerable fan, how about two steerable fans? [right] Omnidirectional motion was possible with this setup, but packaging it inside of a Zip didn’t work.Zipline

Laszlo: We then started looking at configurations with a main fan and a second smaller fan, with the bigger fan at the back pushing forward and the smaller fan at the front providing thrust for turning. The third fan we added relatively late, because we didn’t want to add it at all. But we found that [with two fans] the Droid would have to spin relatively quickly to align to shifting winds, whereas with a third fan we can just push sideways in the direction we need.
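
In control terms, the three-fan layout Laszlo describes is a thrust-allocation problem: map a desired planar force and yaw torque onto individual fan thrusts. A least-squares sketch with invented geometry (the fan positions and directions below are assumptions, not Zipline’s actual layout):

```python
import numpy as np

# Invented geometry: a big rear fan thrusting forward (+x) and two small
# fans thrusting sideways (+y), one forward and one aft of center.
fan_dirs = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])    # unit directions
fan_pos  = np.array([[-0.3, 0.0], [0.4, 0.0], [-0.4, 0.0]])  # meters

# Columns of A map each fan's thrust to [Fx, Fy, yaw torque].
A = np.zeros((3, 3))
for i, (d, r) in enumerate(zip(fan_dirs, fan_pos)):
    A[0:2, i] = d                           # force contribution
    A[2, i] = r[0] * d[1] - r[1] * d[0]     # planar torque: r x d

def allocate(fx, fy, tau):
    """Least-squares fan thrusts for a desired force/torque (toy model).
    A real allocator would also enforce non-negative thrust and fan limits."""
    thrusts, *_ = np.linalg.lstsq(A, np.array([fx, fy, tau]), rcond=None)
    return thrusts

# Hold position against a crosswind: 2 N sideways, no forward force, no spin.
print(allocate(0.0, 2.0, 0.0))  # the two side fans share the load: [0. 1. 1.]
```

With only two fans, the same sideways demand would require yawing the whole vehicle first, which is exactly the behavior the third fan was added to avoid.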

What kind of intelligence does the Droid have?

The current design of Zipline’s Platform 2 Droid is built around a large thruster in the rear and two smaller thrusters at the front and back.Zipline

Wyrobek: The Droid has its own little autopilot, and there’s a very simple communications system between the two vehicles. You may think that it’s a really complex coordinated control problem, but it’s not: The Zip just kind of hangs out, and the Droid takes care of the delivery. The sensing challenge is for the Droid to find trees and powerlines and things like that, and then find a good delivery site.

Was there ever a point at which you were concerned that the size and weight and complexity would not be worth it?

Wyrobek: Our mindset was to fail fast, to try things and do what we needed to do to convince ourselves that it wasn’t a good path. What’s fun about this kind of iterative process is oftentimes, you try things and you realize that actually, this is better than we thought.

Laszlo: We first thought about the Droid as a little bit of a tax, in that it’s costing us extra weight. But if your main drone can stay high enough up that it avoids trees and buildings, then it can just float around up there. If it gets pushed around by the wind, it doesn’t matter because the Droid can compensate.

Wyrobek: Keeping the Zip at altitude is a big win in many ways. It doesn’t have to spend energy station-keeping, descending, and then ascending again. We do all of that with the much smaller Droid, which also makes the hovering phase much shorter. It’s also much more efficient to control the small Droid than the large Zip. And having all of the sensors on the Droid very close to the area you’re delivering to makes that problem easier as well. It may look like a more complex system from the outside, but from the inside, it’s basically making all the hardest problems much easier.

Over the past year, Zipline has set up a bunch of partnerships to make residential deliveries to consumers using the Droid starting in 2024, including prescriptions from Cleveland Clinic in Ohio, medical products from WellSpan Health in Pennsylvania, tasty food from Mendocino Farms in California, and a little bit of everything from Walmart, starting in Dallas. Zipline’s plan is to kick things off with Platform 2 later this year.
