Feed aggregator

Introduction: Humans and robots are increasingly collaborating on complex tasks such as firefighting. As robots become more autonomous, collaboration in human-robot teams should be combined with meaningful human control. Variable autonomy approaches can ensure meaningful human control over robots by satisfying accountability, responsibility, and transparency. To verify whether variable autonomy approaches truly ensure meaningful human control, the concept should be operationalized so that it can be measured. So far, designers of variable autonomy approaches lack metrics to systematically address meaningful human control.

Methods: Therefore, this qualitative focus group study (n = 5 experts) explored quantitative operationalizations of meaningful human control during dynamic task allocation using variable autonomy in human-robot teams for firefighting. This variable autonomy approach dynamically allocates moral decisions to humans and non-moral decisions to robots, based on the robot’s identification of moral sensitivity. We analyzed the focus group data using reflexive thematic analysis.

Results: Results highlight the usefulness of quantifying the traceability requirement of meaningful human control, and how situation awareness and performance can be used to objectively measure aspects of the traceability requirement. Moreover, results emphasize that team and robot outcomes can be used to verify meaningful human control but that identifying reasons underlying these outcomes determines the level of meaningful human control.

Discussion: Based on our results, we propose an evaluation method that can verify if dynamic task allocation using variable autonomy in human-robot teams for firefighting ensures meaningful human control over the robot. This method involves subjectively and objectively quantifying traceability using human responses during and after simulations of the collaboration. In addition, the method involves semi-structured interviews after the simulation to identify reasons underlying outcomes and suggestions to improve the variable autonomy approach.

To effectively control a robot’s motion, it is common to employ a simplified model that approximates the robot’s dynamics. Nevertheless, discrepancies between the actual mechanical properties of the robot and the simplified model can result in motion failures. To address this issue, this study introduces a pneumatic-driven bipedal musculoskeletal robot designed to closely match the mechanical characteristics of a simplified spring-loaded inverted pendulum (SLIP) model. The SLIP model is widely utilized in robotics due to its passive stability and dynamic properties resembling human walking patterns. The musculoskeletal bipedal robot was designed and manufactured to concentrate its center of mass within a compact body around the hip joint, with low leg inertia in accordance with SLIP model principles. Furthermore, we validated that the robot exhibits dynamic characteristics similar to the SLIP model through a sequential jumping experiment and by comparing its performance to a SLIP model simulation.
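For readers unfamiliar with the SLIP abstraction, here is a minimal Python sketch of its two-phase dynamics: a spring-leg stance phase and a ballistic flight phase. The mass, stiffness, and leg-length values are hypothetical placeholders rather than the parameters of the robot described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical SLIP parameters (placeholders, not the robot's values).
m = 10.0     # point mass concentrated at the hip [kg]
k = 2000.0   # leg spring stiffness [N/m]
l0 = 0.5     # rest leg length [m]
g = 9.81     # gravitational acceleration [m/s^2]

def stance_dynamics(t, s, foot_x):
    """Stance phase: a massless spring leg pushes the point mass."""
    x, z, vx, vz = s
    dx, dz = x - foot_x, z          # vector from foot contact point to hip
    l = np.hypot(dx, dz)            # current leg length
    f = k * (l0 - l)                # spring force (positive when compressed)
    return [vx, vz, f * dx / (l * m), f * dz / (l * m) - g]

def flight_dynamics(t, s):
    """Flight phase: purely ballistic motion of the point mass."""
    x, z, vx, vz = s
    return [vx, vz, 0.0, -g]

# Example: integrate one stance phase until the leg returns to its rest length.
liftoff = lambda t, s, foot_x: np.hypot(s[0] - foot_x, s[1]) - l0
liftoff.terminal, liftoff.direction = True, 1
sol = solve_ivp(stance_dynamics, (0.0, 1.0), [0.0, 0.45, 0.5, 0.0],
                args=(0.0,), events=liftoff, max_step=1e-3)
print("state at liftoff:", sol.y[:, -1])
```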

Deep generative models (DGM) are increasingly employed in emergent communication systems. However, their application in multimodal data contexts is limited. This study proposes a novel model that combines multimodal DGM with the Metropolis-Hastings (MH) naming game, enabling two agents to focus jointly on a shared subject and develop common vocabularies. The model demonstrates that it can handle multimodal data, even when some modalities are missing. Integrating the MH naming game with multimodal variational autoencoders (VAE) allows agents to form perceptual categories and exchange signs within multimodal contexts. Moreover, fine-tuning the weight ratio to favor a modality that the model could learn and categorize more readily improved communication. Our evaluation of three multimodal approaches, mixture-of-experts (MoE), product-of-experts (PoE), and mixture-of-products-of-experts (MoPoE), suggests that the choice of approach affects the creation of latent spaces, the internal representations of agents. Our results from experiments with the MNIST + SVHN and Multimodal165 datasets indicate that combining the Gaussian mixture model (GMM), PoE multimodal VAE, and MH naming game substantially improved information sharing, knowledge formation, and data reconstruction.
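As a rough illustration of the Metropolis-Hastings naming game at the core of this model, the Python sketch below shows a single sign exchange between two agents over a discrete sign set, with the listener accepting the speaker’s proposal according to an MH acceptance ratio computed from its own belief. This is a simplified stand-in for the method described in the paper: there, the sign posteriors come from multimodal VAEs and a GMM rather than the hand-set probabilities used here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_naming_game_step(speaker_probs, listener_probs, current_sign):
    """One Metropolis-Hastings naming-game exchange over a discrete sign set.

    speaker_probs  : speaker's belief over signs given its own observation
    listener_probs : listener's belief over signs given its own observation
    current_sign   : index of the sign the listener currently uses for the object
    """
    # Speaker proposes a sign by sampling from its own belief.
    proposal = rng.choice(len(speaker_probs), p=speaker_probs)
    # Listener accepts with the MH ratio computed from its own belief.
    accept_prob = min(1.0, listener_probs[proposal] /
                      max(listener_probs[current_sign], 1e-12))
    if rng.random() < accept_prob:
        return proposal, True     # listener adopts the speaker's sign
    return current_sign, False    # listener keeps its current sign

# Toy exchange over 3 candidate signs.
speaker = np.array([0.7, 0.2, 0.1])    # speaker strongly prefers sign 0
listener = np.array([0.4, 0.5, 0.1])   # listener currently prefers sign 1
print(mh_naming_game_step(speaker, listener, current_sign=1))
```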

Exoskeletons that assist in ankle plantarflexion can improve the energy economy of locomotion. Characterizing the joint-level mechanisms behind these reductions in energy cost can lead to a better understanding of how people interact with these devices, as well as to improved device design and training protocols. We examined the biomechanical responses to exoskeleton assistance in exoskeleton users trained with a lengthened protocol. Kinematics at unassisted joints were generally unchanged by assistance, which has been observed in other ankle exoskeleton studies. Peak plantarflexion angle increased with plantarflexion assistance, which led to increased total and biological mechanical power despite decreases in biological joint torque and whole-body net metabolic energy cost. Ankle plantarflexor activity also decreased with assistance. Muscles acting about unassisted joints increased their activity at high levels of assistance, and this response should be investigated over long-term use to prevent overuse injuries.

Introduction: Patients who are hospitalized may be at a higher risk for falling, which can result in additional injuries, longer hospitalizations, and extra cost for healthcare organizations. A frequent context for these falls is when a hospitalized patient needs to use the bathroom. While it is possible that “high-tech” tools like robots and AI applications can help, adopting a human-centered approach and engaging users and other affected stakeholders in the design process can help to maximize benefits and avoid unintended consequences.

Methods: Here, we detail our findings from a human-centered design research effort to investigate how the process of toileting a patient can be ameliorated through the application of advanced tools like robots and AI. We engaged healthcare professionals in interviews, focus groups, and a co-creation session in order to recognize common barriers in the toileting process and find opportunities for improvement.

Results: In our conversations with participants, who were primarily nurses, we learned that toileting is more than a nuisance for technology to remove through automation. Nurses seem keenly aware and responsive to the physical and emotional pains experienced by patients during the toileting process, and did not see technology as a feasible or welcomed substitute. Instead, nurses wanted tools which supported them in providing this care to their patients. Participants envisioned tools which helped them anticipate and understand patient toileting assistance needs so they could plan to assist at convenient times during their existing workflows. Participants also expressed favorability towards mechanical assistive features which were incorporated into existing equipment to ensure ubiquitous availability when needed without adding additional mass to an already cramped and awkward environment.

Discussion: We discovered that the act of toileting served more than one function, and can be viewed as a valuable touchpoint in which nurses can assess, support, and encourage their patients to engage in their own recovery process as they perform a necessary and normal function of life. While we found opportunities for technology to make the process safer and less burdensome for patients and clinical staff alike, we believe that designers should preserve and enhance the therapeutic elements of the nurse-patient interaction rather than eliminate it through automation.

Introduction: Communication from automated vehicles (AVs) to pedestrians using augmented reality (AR) could positively contribute to traffic safety. However, previous AR research for pedestrians was mainly conducted through online questionnaires or experiments in virtual environments instead of real ones.

Methods: In this study, 28 participants conducted trials outdoors with an approaching AV and were supported by four different AR interfaces. The AR experience was created by having participants wear a Varjo XR-3 headset with see-through functionality, with the AV and AR elements virtually overlaid onto the real environment. The AR interfaces were vehicle-locked (Planes on vehicle), world-locked (Fixed pedestrian lights, Virtual fence), or head-locked (Pedestrian lights HUD). Participants had to hold down a button when they felt it was safe to cross, and their opinions were obtained through rating scales, interviews, and a questionnaire.

Results: The results showed that participants had a subjective preference for AR interfaces over no AR interface. Furthermore, the Pedestrian lights HUD was more effective than no AR interface (a statistically significant difference), as it led to participants keeping the button pressed more often. The Fixed pedestrian lights scored lower than the other interfaces, presumably due to low saliency and the fact that participants had to visually identify both this AR interface and the AV.

Discussion: In conclusion, while users favour AR in AV-pedestrian interactions over no AR, its effectiveness depends on design factors like location, visibility, and visual attention demands. More broadly, this work provides important insights into the use of AR outdoors. The findings illustrate that, in these circumstances, a clear and easily interpretable AR interface is of key importance.



Citing “no path to regulatory approval in the European Union,” Amazon and iRobot have announced the termination of an acquisition deal first announced in August of 2022 that would have made iRobot a part of Amazon and valued the robotics company at US $1.4 billion.

The European Commission released a statement today that explained some of its concerns, which, to be fair, seem like reasonable things to be concerned about:

Our in-depth investigation preliminarily showed that the acquisition of iRobot would have enabled Amazon to foreclose iRobot’s rivals by restricting or degrading access to the Amazon Stores.… We also preliminarily found that Amazon would have had the incentive to foreclose iRobot’s rivals because it would have been economically profitable to do so. All such foreclosure strategies could have restricted competition in the market for robot vacuum cleaners, leading to higher prices, lower quality, and less innovation for consumers.

Amazon, for its part, characterizes this as “undue and disproportionate regulatory hurdles.” Whoever you believe is correct, the protracted strangulation of this acquisition deal has not been great for iRobot, and its termination is potentially disastrous—Amazon will have to pay iRobot a $94 million termination fee, which is basically nothing for it, and meanwhile iRobot is already laying off 350 people, or 31 percent of its head count.

From one of iRobot’s press releases:

“iRobot is an innovation pioneer with a clear vision to make consumer robots a reality,” said Colin Angle, Founder of iRobot. “The termination of the agreement with Amazon is disappointing, but iRobot now turns toward the future with a focus and commitment to continue building thoughtful robots and intelligent home innovations that make life better, and that our customers around the world love.”

The reason that I don’t feel much better after reading that statement is that Colin Angle has already stepped down as chairman and CEO of iRobot. Angle was one of the founders of iRobot (along with Rodney Brooks and Helen Greiner) and has stuck with the company for its entire 30+ year existence, until just now. So, that’s not great. Also, I’m honestly not sure how iRobot is going to create much in the way of home innovations since the press release states that the company is “pausing all work related to non-floor care innovations, including air purification, robotic lawn mowing and education,” while also “reducing R&D expense by approximately $20 million year-over-year.”

iRobot’s lawn mower has been paused for a while now, so it’s not a huge surprise that nothing will move forward there, but a pause on the education robots like Create and Root is a real blow to the robotics community. And even if iRobot is focusing on floor-care innovations, I’m not sure how much innovation will be possible with a slashed R&D budget amidst huge layoffs.

Sigh.

On LinkedIn, Colin Angle wrote a little bit about what he called “the magic of iRobot”:

iRobot built the first micro rovers and changed space exploration forever. iRobot built the first practical robots that left the research lab and went on combat missions to defuse bombs, saving 1000’s of lives. iRobot’s robots crucially enabled the cold shutdown of the reactors at Fukushima, found the underwater pools of oil in the aftermath of the deep horizon oil rig disaster in the Gulf of Mexico. And pioneered an industry with Roomba, fulfilling the unfulfilled promise of over 50 years for practical robots in the home.

Why?

As I think about all the events surrounding those actions, there is a common thread. We believed we could. And we decided to try with a spirit of pragmatic optimism. Building robots means knowing failure. It does not treat blind hope kindly. Robots are too complex. Robots are too expensive. Robots are too challenging for hope alone to have the slightest chance of success. But combining the belief that a problem can be solved with a commitment to the work to solve it enabled us to change the world.

And that’s what I personally find so worrying about all of this. iRobot has a treasured history of innovation which is full of successes and failures and really weird stuff, and it’s hard to see how that will be able to effectively continue. Here are a couple of my favorite weird iRobot things, including a PackBot that flies (for a little bit) and a morphing blobular robot:

I suppose it’s worth pointing out that the weirdest stuff (like in the videos above) is all over a decade old, and you can reasonably ask whether iRobot was that kind of company anymore even before this whole Amazon thing happened. The answer is probably not, since the company has chosen to focus almost exclusively on floor-care robots. But even there we’ve seen consistent innovation in hardware and software that pretty much every floor-care robot company seems to then pick up on about a year later. This is not to say that other floor-care robots can’t innovate, but it’s undeniable that iRobot has been a driving force behind that industry. Will that continue? I really hope so.




Background: Robots are increasingly used as interaction partners with humans. Social robots are designed to follow expected behavioral norms when engaging with humans and are available with different voices and even accents. Some studies suggest that people prefer robots to speak in the user’s dialect, while others indicate a preference for different dialects.

Methods: Our study examined the impact of the Berlin dialect on perceived trustworthiness and competence of a robot. One hundred and twenty German native speakers (mean age = 32 years, SD = 12 years) watched an online video featuring a NAO robot speaking either in the Berlin dialect or standard German and assessed its trustworthiness and competence.

Results: We found a positive relationship between participants’ self-reported Berlin dialect proficiency and the trustworthiness they attributed to the dialect-speaking robot. Only when controlling for demographic factors was there a positive association between participants’ dialect proficiency and dialect performance and their assessment of the competence of the standard German-speaking robot. Participants’ age, gender, length of residency in Berlin, and the device used to respond also influenced assessments. Finally, the robot’s competence positively predicted its trustworthiness.

Discussion: Our results inform the design of social robots and emphasize the importance of device control in online experiments.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Made from beautifully fabricated steel and eight mobile arms, medusai can play percussion and strings with human musicians, dance with human dancers, and move in time to multiple human observers. It uses AI-driven computer vision to know what human observers are doing and responds accordingly through snake gestures, music, and light.

If this seems a little bit unsettling, that’s intentional! The project was designed to explore the concepts of trust and risk in the context of robots, and of using technology to influence emotion.

[ medusai ] via [ Georgia Tech ]

Thanks, Gil!

On 19 April 2021, NASA’s Ingenuity Mars Helicopter made history when it completed the first powered, controlled flight on the Red Planet. It flew for the last time on 18 January 2024.

[ NASA JPL ]

Teleoperation plays a crucial role in enabling robot operations in challenging environments, yet existing limitations in effectiveness and accuracy necessitate the development of innovative strategies for improving teleoperated tasks. The work illustrated in this video introduces a novel approach that utilizes mixed reality and assistive autonomy to enhance the efficiency and precision of humanoid robot teleoperation.

Sometimes all it takes is one good punch, and then you can just collapse.

[ Paper ] via [ IHMC ]

Thanks, Robert!

The new Dusty Robotics FieldPrinter 2 enhances on-site performance and productivity through its compact design and extended capabilities. Building upon the success of the first-generation FieldPrinter, which has printed over 91 million square feet of layout, the FieldPrint Platform incorporates lessons learned from years of experience in the field to deliver an optimized experience for all trades on site.

[ Dusty Robotics ]

Quadrupedal robots have emerged as a cutting-edge platform for assisting humans, finding applications in tasks related to inspection and exploration in remote areas. Nevertheless, their floating base structure renders them susceptible to failure in cluttered environments, where manual recovery by a human operator may not always be feasible. In this study, we propose a robust all-terrain recovery policy to facilitate rapid and secure recovery in cluttered environments.

[ DreamRiser ]

The work that Henry Evans is doing with Stretch (along with Hello Robot and Maya Cakmak’s lab at the University of Washington) will be presented at Humanoids this spring.

[ UW HCRL ]

Thanks, Stefan!

I like to imagine that these are just excerpts from one very long walk that Digit took around San Francisco.

[ Hybrid Robotics Lab ]

Boxing, drumming, stacking boxes, and various other practices... these are the daily teleoperation tests of our humanoid robot. Working alongside engineers, our humanoid robots collect real-world teleoperation data that is used to iterate on learning-based control algorithms.

[ LimX Dynamics ]

The OpenDR project aims to develop a versatile and open tool kit for fundamental robot functions, using deep learning to enhance their understanding and decision-making abilities. The primary objective is to make robots more intelligent, particularly in critical areas like health care, agriculture, and production. In the health care setting, the TIAGo robot is deployed to offer assistance and support within a health care facility.

[ OpenDR ] via [ PAL Robotics ]

[ ARCHES ]

Christoph Bartneck gives a talk entitled “Social robots: The end of the beginning or the beginning of the end?”

[ Christoph Bartneck ]

Professor Michael Jordan offers his provocative thoughts on the blending of AI and economics and takes us on a tour of Trieste, a beautiful and grand city in northern Italy.

[ Berkeley ]






The Ingenuity Mars Helicopter made its 72nd and final flight on 18 January. “While the helicopter remains upright and in communication with ground controllers,” NASA’s Jet Propulsion Lab said in a press release this afternoon, “imagery of its Jan. 18 flight sent to Earth this week indicates one or more of its rotor blades sustained damage during landing, and it is no longer capable of flight.” That’s what you’re seeing in the picture above: the shadow of a broken tip of one of the helicopter’s four two-foot long carbon fiber rotor blades. NASA is assuming that at least one blade struck the Martian surface during a “rough landing,” and this is not the kind of damage that will allow the helicopter to get back into the air. Ingenuity’s mission is over.


The Perseverance rover took this picture of Ingenuity on Aug. 2, 2023, just before flight 54. NASA/JPL-Caltech/ASU/MSSS

NASA held a press conference earlier this evening to give as much information as they can about exactly what happened to Ingenuity, and what comes next. First, here’s a summary from the press release:

Ingenuity’s team planned for the helicopter to make a short vertical flight on Jan. 18 to determine its location after executing an emergency landing on its previous flight. Data shows that, as planned, the helicopter achieved a maximum altitude of 40 feet (12 meters) and hovered for 4.5 seconds before starting its descent at a velocity of 3.3 feet per second (1 meter per second).

However, about 3 feet (1 meter) above the surface, Ingenuity lost contact with the rover, which serves as a communications relay for the rotorcraft. The following day, communications were reestablished and more information about the flight was relayed to ground controllers at NASA JPL. Imagery revealing damage to the rotor blade arrived several days later. The cause of the communications dropout and the helicopter’s orientation at time of touchdown are still being investigated.

While NASA doesn’t know for sure what happened, they do have some ideas based on the cause of the emergency landing during the previous flight, Flight 71. “[This location] is some of the hardest terrain we’ve ever had to navigate over,” said Teddy Tzanetos, Ingenuity Project Manager at NASA JPL, during the NASA press conference. “It’s very featureless—bland, sandy terrain. And that’s why we believe that during Flight 71, we had an emergency landing. She was flying over the surface and was realizing that there weren’t too many rocks to look at or features to navigate from, and that’s why Ingenuity called an emergency landing on her own.”

Ingenuity uses a downward-pointing VGA camera running at 30 Hz for monocular feature tracking, and compares the apparent motion of distinct features between frames to determine its motion over the ground. This optical flow technique is used for drones (and other robots) on Earth too, and it’s very reliable, as long as you have enough features to track. Where it starts to go wrong is when your camera is looking at things that are featureless, which is why consumer drones will sometimes warn you about unexpected behavior when flying over water, and why robotics labs often have bizarre carpets and wallpaper: the more features, the better. On Mars, Ingenuity has been reliably navigating by looking for distinctive features like rocks, but flying over a featureless expanse of sand caused serious problems, as Ingenuity’s Chief Pilot Emeritus Håvard Grip explained to us during today’s press conference:

The way a system like this works is by looking at the consensus of [the features] it sees, and then throwing out the things that don’t really agree with the consensus. The danger is when you run out of features, when you don’t have very many features to navigate on, and you’re not really able to establish what that consensus is and you end up tracking the wrong kinds of features, and that’s when things can get off track.
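To make that failure mode concrete, here is a rough Python/OpenCV sketch of the kind of feature-based motion estimate described above. It is not Ingenuity’s flight software, which runs on far more constrained hardware and fuses the camera with IMU and altimeter data; it simply illustrates why a frame with too few trackable features leaves the estimator without a consensus.

```python
import cv2
import numpy as np

def estimate_ground_motion(prev_gray, curr_gray):
    """Estimate apparent image-plane motion between two downward-looking frames.

    Returns the median feature displacement in pixels, or None when too few
    features can be tracked -- the same failure mode featureless sand creates.
    """
    # Detect trackable corners (e.g., rocks) in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None or len(pts) < 20:
        return None  # too few features: no consensus is possible

    # Track the corners into the current frame with pyramidal Lucas-Kanade flow.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.flatten() == 1].reshape(-1, 2)
    good_next = nxt[status.flatten() == 1].reshape(-1, 2)
    if len(good_prev) < 20:
        return None

    # Use the median displacement as a crude consensus; a real system would
    # also reject outliers and fuse the result with IMU and altitude data.
    return np.median(good_next - good_prev, axis=0)
```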

This view from Ingenuity’s navigation camera during flight 70 (on December 22) shows areas of nearly featureless terrain that would cause problems during flights 71 and 72. NASA/JPL-Caltech

After the Flight 71 emergency landing, the team decided to try a “pop-up” flight next: it was supposed to be about 30 seconds in the air, just straight up to 12 meters and then straight down as a check-out of the helicopter’s systems. As Ingenuity was descending, just before landing, there was a loss of communications with the helicopter. “We have reason to believe that it was facing the same featureless sandy terrain challenges [as in the previous flight],” said Tzanetos. “And because of the navigation challenges, we had a rotor strike with the surface that would have resulted in a power brownout which caused the communications loss.” Grip describes what he thinks happened in more detail:

Some of this is speculation because of the sparse telemetry that we have, but what we see in the telemetry is that coming down towards the last part of the flight, on the sand, when we’re closing in on the ground, the helicopter relatively quickly starts to think that it’s moving horizontally away from the landing target. It’s likely that it made an aggressive maneuver to try to correct that right upon landing. And that would have accounted for a sideways motion and tilt of the helicopter that could have led to either striking the blade to the ground and then losing power, or making a maneuver that was aggressive enough to lose power before touching down and striking the blade, we don’t know those details yet. We may never know. But we’re trying as hard as we can with the data that we have to figure out those details.

When the Ingenuity team tried reestablishing contact with the helicopter the next sol, “she was right there where we expected her to be,” Tzanetos said. “Solar panel currents were looking good, which indicated that she was upright.” In fact, everything was “green across the board.” That is, until the team started looking through the images from Ingenuity’s navigation camera, and spotted the shadow of the damaged lower blade. Even if that’s the only damage to Ingenuity, the whole rotor system is now both unbalanced and producing substantially less lift, and further flights will be impossible.

A closeup of the shadow of the damaged blade tip. NASA/JPL-Caltech

There’s always that piece in the back of your head that’s getting ready every downlink—today could be the last day, today could be the last day. So there was an initial moment, obviously, of sadness, seeing that photo come down and pop on screen, which gives us certainty of what occurred. But that’s very quickly replaced with happiness and pride and a feeling of celebration for what we’ve pulled off. Um, it’s really remarkable the journey that she’s been on and worth celebrating every single one of those sols. Around 9pm tonight Pacific time will mark 1000 sols that Ingenuity has been on the surface since her deployment from the Perseverance rover. So she picked a very fitting time to come to the end of her mission. —Teddy Tzanetos

The Ingenuity team is guessing that there’s damage to more than one of the helicopter’s blades; the blades spin fast enough that if one hit the surface, others likely did too. The plan is to attempt to slowly spin the blades to bring others into view to try and collect more information. It sounds unlikely that NASA will divert the Perseverance rover to give Ingenuity a closer look; while continuing on its science mission the rover will come within 200 to 300 meters of Ingenuity and will try to take some pictures, but that’s likely too far away for a good quality image.

Perseverance watches Ingenuity take off on flight 47 on March 14, 2023. NASA/JPL-Caltech/ASU/MSSS

As a tech demo, Ingenuity’s entire reason for existence was to push the boundaries of what’s possible. And as Grip explains, even in its last flight, the little helicopter was doing exactly that, going above and beyond and trying newer and riskier things until it got as far as it possibly could:

Overall, the way that Ingenuity has navigated using features of terrain has been incredibly successful. We didn’t design this system to handle this kind of terrain, but nonetheless it’s sort of been invincible until this moment where we flew in this completely bland terrain where you just have nothing to really hold on to. So there are some lessons in that for us: we now know that that particular kind of terrain can be a trap for a system like this. Backing up when encountering this featureless terrain is a functionality that a future helicopter could be equipped with. And then there are solutions like having a higher resolution camera, which would have likely helped mitigate this situation. But it’s all part of this tech demo, where we equipped this helicopter to do at most five flights in a pre-scouted area and it’s gone on to do so much more than that. And we just worked it all the way up to the line, and then just tipped it right over the line to where it couldn’t handle it anymore.

Arguably, Ingenuity’s most important contribution has been showing that it’s not just possible, but practical and valuable to have rotorcraft on Mars. “I don’t think we’d be talking about sample recovery helicopters if Ingenuity didn’t fly, period, and if it hadn’t survived for as long as it has,” Teddy Tzanetos told us after Ingenuity’s 50th flight. And it’s not just the sample return mission: JPL is also developing a much larger Mars Science Helicopter, which will owe its existence to Ingenuity’s success.

Nearly three years on Mars. 128 minutes and 11 miles of flight in the Martian skies. “I look forward to the day that one of our astronauts brings home Ingenuity and we can all visit it in the Smithsonian,” said Director of JPL Laurie Leshin at the end of today’s press conference.

I’ll be first in line.

We’ve written extensively about Ingenuity, including in-depth interviews with both helicopter and rover team members, and they’re well worth re-reading today. Thanks, Ingenuity. You did well.


What Flight 50 Means for the Ingenuity Mars Helicopter

Team lead Teddy Tzanetos on the helicopter’s milestone aerial mission


Mars Helicopter Is Much More Than a Tech Demo

A Mars rover driver explains just how much of a difference the little helicopter scout is making to Mars exploration


Ingenuity’s Chief Pilot Explains How to Fly a Helicopter on Mars

Simulation is the secret to flying a helicopter on Mars


How NASA Designed a Helicopter That Could Fly Autonomously on Mars

The Perseverance rover’s Mars Helicopter (Ingenuity) will take off, navigate, and land on Mars without human intervention




Planetary exploration rovers have been employed for scientific endeavors and as precursors to upcoming crewed missions. Predicting rover traversability from wheel slip enables safe and efficient autonomous operation on deformable planetary surfaces; path planning algorithms that reduce slip by considering wheel-soil interaction or terrain data can minimize the risk of the rover becoming immobilized. Understanding wheel-soil interaction in transient states is vital for developing a more precise slip ratio prediction model, whereas past path planning work has assumed that the slip experienced along a path is a sequence of steady-state slip ratios. In this paper, we focus on transient slip, i.e., the slip rate (the time derivative of the slip ratio), and explicitly incorporate it into the cost function of a path planning algorithm. We developed a regression model that takes the slip rate and traction force as inputs and outputs the slip ratio; this model is used in the cost function to minimize rover slip during the path planning phase. Experiments using a single-wheel testbed revealed that, even at the same wheel traction force, the slip ratio varies with the slip rate; the smaller the absolute value of the slip rate, the larger the slip ratio for the same traction force. Statistical analysis confirms that the regression model estimates the slip ratio with an average accuracy of 85%. Path planning simulations with the regression model showed a 58% reduction in the slip experienced by the rover when driving through rough terrain. Dynamics simulation results further indicated that the proposed method can reduce the slip rate in rough terrain environments.
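As a rough sketch of the idea in this abstract, the Python snippet below fits a simple regression from (slip rate, traction force) to slip ratio and folds the prediction into a path-planning edge cost. The data, model form, cost weighting, and function names are hypothetical placeholders, not the authors’ actual model or planner.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical single-wheel testbed log: slip rate [1/s], traction force [N],
# and measured slip ratio (dimensionless). Values are made up, but follow the
# trend reported above: lower |slip rate| -> higher slip ratio at a given force.
slip_rate  = np.array([0.00, 0.05, 0.10, 0.20, 0.00, 0.05, 0.10, 0.20])
traction   = np.array([20.0, 20.0, 20.0, 20.0, 40.0, 40.0, 40.0, 40.0])
slip_ratio = np.array([0.35, 0.28, 0.22, 0.15, 0.55, 0.45, 0.38, 0.30])

# Simple polynomial regression standing in for the paper's model:
# inputs (slip rate, traction force) -> output slip ratio.
X = np.column_stack([slip_rate, traction])
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, slip_ratio)

def edge_cost(distance, pred_slip_rate, pred_traction, w_slip=5.0):
    """Path-planning edge cost that penalizes the predicted slip ratio."""
    s = float(model.predict([[pred_slip_rate, pred_traction]])[0])
    return distance * (1.0 + w_slip * max(s, 0.0))

print(edge_cost(distance=1.0, pred_slip_rate=0.05, pred_traction=30.0))
```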

Active upper limb exoskeletons are a potentially powerful tool for neuromotor rehabilitation. This potential depends on several basic control modes, one of them being transparency. In this control mode, the exoskeleton must follow human movement without altering it, which theoretically implies null interaction efforts. Reaching high, albeit imperfect, levels of transparency requires both an adequate control method and an in-depth evaluation of the exoskeleton’s impact on human movement. The present paper introduces such an evaluation for three different “transparent” controllers, based either on an identification of the exoskeleton’s dynamics, on force feedback control, or on their combination. These controllers are therefore likely to induce clearly different levels of transparency by design. The investigation helps clarify how humans adapt to transparent controllers, which are necessarily imperfect. A group of fourteen participants used each of the three controllers while performing reaching movements in a parasagittal plane. The subsequent analyses covered interaction efforts, kinematics, electromyographic signals, and ergonomic feedback questionnaires. Results showed that, with the lower-performing transparent controllers, participants’ strategies tended to induce relatively high interaction efforts and higher muscle activity, which resulted in low sensitivity of the kinematic metrics. In other words, very different residual interaction efforts do not necessarily induce very different movement kinematics. Such behavior could be explained by a natural human tendency to expend effort to preserve preferred kinematics, which should be taken into account in future evaluations of transparent controllers.
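To make the notion of a “transparent” controller concrete, here is a minimal single-joint Python sketch combining the two ingredients named in the abstract: a feedforward term from an identified model of the exoskeleton’s own dynamics and a force-feedback term that drives the measured interaction torque toward zero. The parameters, gains, and sign conventions are illustrative assumptions, not those of the controllers evaluated in the paper.

```python
import numpy as np

# Hypothetical identified parameters for a single exoskeleton joint.
I_EXO = 0.05   # reflected inertia [kg m^2]
M_G   = 1.2    # gravity torque coefficient (m * g * l) [N m]
B_V   = 0.08   # viscous friction coefficient [N m s/rad]

def transparent_torque(q, dq, ddq_des, tau_interaction, kp=3.0):
    """Motor torque for one control step of a 'transparent' mode.

    Combines a feedforward term from the identified exoskeleton dynamics with
    force feedback that drives the measured human-robot interaction torque
    toward zero. All parameters and signs here are illustrative.
    """
    tau_model = I_EXO * ddq_des + M_G * np.cos(q) + B_V * dq  # identified dynamics
    tau_feedback = -kp * tau_interaction                      # force feedback term
    return tau_model + tau_feedback

# Example call with made-up sensor readings (joint angle, velocity,
# desired acceleration, and measured interaction torque).
print(transparent_torque(q=0.3, dq=0.5, ddq_des=0.1, tau_interaction=0.2))
```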



Over the past few weeks, we’ve seen a couple of high-profile videos of robotic systems doing really impressive things. And I mean, that’s what we’re all here for, right? Being impressed by the awesomeness of robots! But sometimes the awesomeness of robots is more complicated than what you see in a video making the rounds on social media—any robot has a lot of things going on behind the scenes to make it successful, but if you can’t tell what those things are, what you see at first glance might be deceiving you.

Earlier this month, a group of researchers from Stanford’s IRIS Lab introduced Mobile ALOHA, which (if you read the YouTube video description) is described as “a low-cost and whole-body teleoperation system for data collection”:

And just last week, Elon Musk posted a video of Tesla’s Optimus robot folding a shirt:


Most people who watch these videos without poking around in the descriptions or comments will likely not assume that these robots were being entirely controlled by experienced humans, because why would they? Even for roboticists, it can be tricky to know for sure whether the robot they’re watching has a human in the loop somewhere. This is a problem that’s not unique to the folks behind either of the videos above; it’s a communication issue that the entire robotics community struggles with. But as robots (and robot videos) become more mainstream, it’s important that we get better at it.

Why use teleoperation?

Humans are way, way, way, way, way better than robots at almost everything. We’re fragile and expensive, which is why so many people are trying to get robots to do stuff instead, but with a very few exceptions involving speed and precision, humans are the gold standard and are likely to remain so for the foreseeable future. So, if you need a robot to do something complicated or something finicky or something that might require some innovation or creativity, the best solution is to put a human in control.

What about autonomy, though?

Having one-to-one human teleoperation of a robot is a great way of getting things done, but it’s not scalable, and aside from some very specific circumstances, the whole point of robots is to do stuff autonomously at scale so that humans don’t have to. One approach to autonomy is to learn as much as you can from human teleoperation: Many robotics companies are betting that they’ll be able to use humans to gradually train their robotic systems, transitioning from full teleoperation to partial teleoperation to supervisory control to full autonomy. Sanctuary AI is a great example of this: They’ve been teleoperating their humanoid robots through all kinds of tasks, collecting training data as a foundation for later autonomy.

What’s wrong with teleoperation, then?

Nothing! Teleoperation is great. But when people see a robot doing something and it looks autonomous but it’s actually teleoperated, that’s a problem, because it’s a misrepresentation of the state of the technology. Not only do people end up with the wrong idea of how your robot functions and what it’s really capable of, it also means that whenever those people see other robots doing similar tasks autonomously, their frame of reference will be completely wrong, minimizing what otherwise may be a significant contribution to the field by other robotics folks. To be clear, I don’t (usually) think that the roboticists making these videos have any intention of misleading people, but that is unfortunately what often ends up happening.

What can we do about this problem?

Last year, I wrote an article for the IEEE Robotics & Automation Society (RAS) with some tips for making a good robot video, which includes arguably the most important thing: context. This covers teleoperation, along with other common things that can cause robot videos to mislead an unfamiliar audience. Here’s an excerpt from the RAS article:

It’s critical to provide accurate context for videos of robots. It’s not always clear (especially to nonroboticists) what a robot may be doing or not doing on its own, and your video should be as explicit as possible about any assistance that your system is getting. For example, your video should identify:

  • If the video has been sped up or slowed down
  • If the video makes multiple experiments look like one continuous experiment
  • If external power, compute, or localization is being used
  • How the robot is being controlled (e.g., human in the loop, human supervised, scripted actions, partial autonomy, full autonomy)

These things should be made explicit on the video itself, not in the video description or in captions. Clearly communicating the limitations of your work is the responsible thing to do, and not doing this is detrimental to the robotics community.

I want to emphasize that context should be made explicit on the video itself. That is, when you edit the video together, add captions or callouts or something that describes the context on top of the actual footage. Don’t put it in the description or in the subtitles or in a link, because when videos get popular online, they may be viewed and shared and remixed without any of that stuff being readily available.
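
As a purely illustrative example of what “on the video itself” can look like in practice, the snippet below burns a context disclosure directly into the footage using ffmpeg’s drawtext filter, so the caption travels with the video even when it is re-shared without its description. The file names and caption wording are placeholders, not taken from the article, and any video editor’s title tool would do the same job.

```python
import subprocess

# Illustrative only: file names and caption wording are placeholders.
caption = "Robot is teleoperated by a human operator. Video shown at 1x speed."

subprocess.run(
    [
        "ffmpeg", "-i", "raw_demo.mp4",
        # drawtext renders the disclosure on top of the footage itself,
        # so it cannot be stripped away when the video is re-shared.
        "-vf",
        f"drawtext=text='{caption}':x=20:y=20:fontsize=28:"
        "fontcolor=white:box=1:boxcolor=black@0.5",
        "-c:a", "copy",
        "demo_with_context.mp4",
    ],
    check=True,
)
```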

So how can I tell if a robot is being teleoperated?

If you run across a video of a robot doing some kind of amazing manipulation task and aren’t sure whether it’s autonomous or not, here are some questions to ask that might help you figure it out.

  • Can you identify an operator? In both of the videos we mentioned above, if you look very closely, you can tell that there’s a human operator, whether it’s a pair of legs or a wayward hand in a force-sensing glove. This may be the first thing to look for, because sometimes an operator is very obvious, but at the same time, not seeing an operator isn’t particularly meaningful because it’s easy for them to be out of frame.
  • Is there any more information? The second thing to check is whether the video says anywhere what’s actually going on. Does the video have a description? Is there a link to a project page or paper? Are there credits at the end of the video? What account is publishing the video? Even if you can narrow down the institution or company or lab, you might be able to get a sense of whether they’re working on autonomy or teleoperation.
  • What kind of task is it? You’re most likely to see teleoperation in tasks that would be especially difficult for a robot to do autonomously. At the moment, that’s predominantly manipulation tasks that aren’t well structured—for example, getting multiple objects to interact with each other, handling things that are difficult to model (like fabrics), or extended multistep tasks. If you see a robot doing this stuff quickly and well, it’s worth questioning whether it’s autonomous.
  • Is the robot just too good? I always start asking more questions when a robot demo strikes me as just too impressive. But when does impressive become too impressive? Personally, I think a robot demonstrating human-level performance at just about any complex task is too impressive. Some autonomous robots definitely have reached that benchmark, but not many, and the circumstances of them doing so are usually atypical. Furthermore, it takes a lot of work to reach humanlike performance with an autonomous system, so there’s usually some warning in the form of previous work. If you see an impressive demo that comes out of nowhere, showcasing an autonomous capability without any recent precedents, that’s probably too impressive. Remember that it can be tricky with a video because you have no idea whether you’re watching the first take or the 500th, and that itself is a good thing to be aware of—even if it turns out that a demo is fully autonomous, there are many other ways of obfuscating how successful the system actually is.
  • Is it too fast? Autonomous robots are well known for being very fast and precise, but only in the context of structured tasks. For complex manipulation tasks, robots need to sense their environment, decide what to do next, and then plan how to move. This takes time. If you see an extended task that consists of multiple parts but the system never stops moving, that suggests it’s not fully autonomous.
  • Does it move like a human? Robots like to move optimally. Humans might also like to move optimally, but we’re bad at it. Autonomous robots tend to move smoothly and fluidly, while teleoperated robots often display small movements that don’t make sense in the context of the task, but are very humanlike in nature. For example, finger motions that are unrelated to gripping, or returning an arm to a natural rest position for no particular reason, or being just a little bit sloppy in general. If the motions seem humanlike, that’s usually a sign of a human in the loop rather than a robot that’s just so good at doing a task that it looks human.

None of these points makes it impossible for an autonomous robot demo to come out of nowhere and blow everyone away. Improbable, perhaps, but not impossible. And the rare moments when that actually happens are part of what makes robotics so exciting. That’s why it’s so important to understand what’s going on when you see a robot doing something amazing—knowing how it’s done, and all of the work that went into it, can only make it more impressive.

This article was inspired by Peter Corke’s LinkedIn post, “What’s with all these deceptive teleoperation demos?” And extra thanks to Peter for his feedback on an early draft of this article.



While organic thin-film transistors built on flexible plastic have been around long enough for people to start discussing a Moore’s Law for bendable ICs, memory devices for these flexible electronics have been a bit more elusive. Now researchers from Tsinghua University in China have developed a fully flexible resistive random access memory device, dubbed FlexRAM, that offers another approach: a liquid one.

In research described in the journal Advanced Materials, the researchers used a gallium-based liquid metal to carry out FlexRAM’s data writing and reading. In an example of biomimicry, the gallium-based liquid metal (GLM) droplets, held in a solution environment, undergo oxidation and reduction processes that mimic the hyperpolarization and depolarization of neurons.

“This breakthrough fundamentally changes traditional notions of flexible memory, offering a theoretical foundation and technical path for future soft intelligent robots, brain-machine interface systems, and wearable/implantable electronic devices.”
—Jing Liu, Tsinghua University

Positive and negative bias voltages define the writing of information as “1” and “0,” respectively. When a low voltage is applied, the liquid metal is oxidized, corresponding to the high-resistance state of “1.” Reversing the voltage polarity returns the metal to its initial low-resistance state of “0.” This reversible switching process allows data to be stored and erased.

To showcase the reading and writing capabilities of FlexRAM, the researchers integrated it into a combined software and hardware setup. Through computer commands, they encoded a string of letters and numbers, represented as 0s and 1s, onto an array of eight FlexRAM storage units, equivalent to one byte of data. The digital signal from the computer was converted into an analog signal using pulse-width modulation to precisely control the oxidation and reduction of the liquid metal.
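
To make the byte-level encoding concrete, here is a toy software model of writing and reading one byte across an array of eight FlexRAM cells, with bias polarity mapped to the high- and low-resistance states described above. This is an illustrative assumption, not the researchers’ code; the resistance values and read threshold are placeholders.

```python
# Toy model only: a positive bias oxidizes a droplet into a high-resistance
# "1", a negative bias reduces it back to a low-resistance "0".
HIGH_RES, LOW_RES = 10_000.0, 10.0  # ohms, placeholder values

def write_byte(cells, byte_value):
    """Apply a bias of the appropriate polarity to each of the 8 cells."""
    for i in range(8):
        bit = (byte_value >> (7 - i)) & 1
        # +V oxidizes (write "1"), -V reduces (write "0").
        cells[i] = HIGH_RES if bit else LOW_RES
    return cells

def read_byte(cells, threshold=1_000.0):
    """Read the byte back by thresholding each cell's resistance."""
    value = 0
    for r in cells:
        value = (value << 1) | (1 if r > threshold else 0)
    return value

cells = [LOW_RES] * 8           # one byte of storage
write_byte(cells, ord("A"))
assert read_byte(cells) == ord("A")
```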

Photographs of the oxidation and reduction state of the gallium-based liquid metal at the heart of FlexRAM. Jing Liu/Tsinghua University

The present prototype is a volatile memory, according to Jing Liu, a professor at the Department of Biomedical Engineering at Tsinghua University. But Liu contends that the memory principle allows for the development of the device into different forms of memory.

This contention is supported by the unusual phenomenon that data stored in FlexRAM persists for a time even after the power is switched off: in a low-oxygen or oxygen-free environment, FlexRAM can retain its data for up to 43,200 seconds (12 hours). The device also withstands repeated use, maintaining stable performance over more than 3,500 operation cycles.

“This breakthrough fundamentally changes traditional notions of flexible memory, offering a theoretical foundation and technical path for future soft intelligent robots, brain-machine interface systems, and wearable/implantable electronic devices,” said Liu.

The GLM droplets are encapsulated in Ecoflex, a stretchable biopolymer. Using a 3D printer, the researchers printed Ecoflex molds and injected gallium-based liquid metal droplets and a solution of polyvinyl acetate hydrogel separately into the cavities in the mold. The hydrogel not only prevents solution leakage but also enhances the mechanical properties of the device, increasing its resistance ratio.

“FlexRAM could be incorporated into entire liquid-based computing systems, functioning as a logic device.”
—Jing Liu, Tsinghua University

In the present prototype, an array of 8 FlexRAM units can store one byte of information.

At this proof-of-concept stage, millimeter-scale molding resolution is sufficient to demonstrate the device’s working principle, Liu notes.

“The conceivable size scale for these FlexRAM devices can range widely,” said Liu. “For example, the size for each of the droplet memory elements can be from millimeter to nano-scale droplets. Interestingly, as revealed by the present study, the smaller the droplet size, the more sensitive the memory response.”

This groundbreaking work paves the way for the realization of brain-like circuits, aligning with concepts proposed by researchers such as Stuart Parkin at IBM over a decade ago. “FlexRAM could be incorporated into entire liquid-based computing systems, functioning as a logic device,” Liu envisions.

As researchers and engineers continue to address challenges and refine the technology, the potential applications of FlexRAM in soft robotics, brain-machine interface systems, and wearable/implantable electronics could be significant.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

You may not be familiar with Swiss-Mile, but you’d almost certainly recognize its robot: it’s the ANYmal with wheels on its feet that can do all kinds of amazing things. Swiss-Mile has just announced a seed round to commercialize these capabilities across quadrupedal platforms, including Unitree’s, which means it’s even affordable-ish!

It’s always so cool to see impressive robotics research move toward commercialization, and I’ve already started saving up for one of these of my own.

[ Swiss-Mile ]

Thanks Marko!

This video presents the capabilities of PAL Robotics’ TALOS robot as it demonstrates agile and robust walking using Model Predictive Control (MPC) references sent to a Whole-Body Inverse Dynamics (WBID) controller developed in collaboration with Dynamograde. The footage shows TALOS navigating various challenging terrains, including stairs and slopes, while handling unexpected disturbances and additional weight.

[ PAL Robotics ]

Thanks Lorna!

Do you want to create a spectacular bimanual manipulation demo? All it takes is this teleoperation system and a carefully cropped camera shot! This is based on the Mobile ALOHA system from Stanford that we featured in Video Friday last week.

[ AgileX ]

Wing is still trying to make the drone-delivery thing work, and it’s got a new, bigger drone to deliver even more stuff at once.

[ Wing ]

A lot of robotics research claims to be about search and rescue and disaster relief, but it really looks like RSL’s ANYmal can actually pull it off.

And here’s even more impressive video, along with some detail about how the system works.

[ Paper ]

This might be the most appropriate soundtrack for a robot video that I’ve ever heard.

Snakes have long captivated robotics researchers due to their effective locomotion, flexible body structure, and ability to adapt their skin friction to different terrains. While extensive research has delved into serpentine locomotion, there remains a gap in exploring rectilinear locomotion as a robotic solution for navigating through narrow spaces. In this study, we describe the fundamental principles of rectilinear locomotion and apply them to design a soft crawling robot using origami modules constructed from laminated fabrics.

[ SDU ]

We wrote about Fotokite’s innovative tethered drone seven or eight years ago, and it’s good to see the company is still doing solid work.

I do miss the consumer version, though.

[ Fotokite ]

[ JDP ] via [ Petapixel ]

This is SHIVAA, the strawberry-picking robot of the DFKI Robotics Innovation Center. The system is being developed in the RoLand (Robotic Systems in Agriculture) project, coordinated by the Robotics Innovation Center (RIC) of DFKI Bremen. Within the project, we design and develop a semi-autonomous mobile system that is capable of harvesting strawberries independently of human interaction.

[ DFKI ]

On December 6, 2023, Demarcus Edwards talked to Robotics students as a speaker in the Undergraduate Robotics Pathways & Careers Speaker Series, which aims to answer the question: “What can I do with a robotics degree?”

[ Michigan Robotics ]

This movie, Loss of Sensation, was released in Russia in 1935. It seems to be the movie that really, really irritated Karel Čapek, because they made his “robots” into mechanical beings instead of biological ones.

[ IMDB ]
