IEEE Spectrum Robotics



Editor’s note: Last week, Amazon announced that it was acquiring iRobot for $1.7 billion, prompting questions about how iRobot’s camera-equipped robot vacuums will protect the data that they collect about your home. In September of 2017, we spoke with iRobot CEO Colin Angle about iRobot’s approach to data privacy, directly addressing many similar concerns. “The views expressed in the Q&A from 2017 remain true,” iRobot told us. “Over the past several years, iRobot has continued to do more to strengthen, and clearly define, its stance on privacy and security. It’s important to note that iRobot takes product security and customer privacy very seriously. We know our customers invite us into their most personal spaces—their homes—because they trust that our products will help them do more. We take that trust seriously.”

The article from 7 September 2017 follows:

About a month ago, iRobot CEO Colin Angle mentioned something about sharing Roomba mapping data in an interview with Reuters. It got turned into a data privacy kerfuffle in a way that iRobot did not intend and (probably) did not deserve, as evidenced by their immediate clarification that iRobot will not sell your data or share it without your consent.

Data privacy is important, of course, especially for devices that live in your home with you. But as robots get more capable, the amount of data that they collect will increase, and sharing that data in a useful, thoughtful, and considerate way could make smart homes way smarter. To understand how iRobot is going to make this happen, we spoke with Angle about keeping your data safe, integrating robots with the future smart home, and robots that can get you a beer.


Were you expecting such a strong reaction on data privacy when you spoke with Reuters?

Colin Angle: We were all a little surprised, but it gave us an opportunity to talk a little more explicitly about our plans on that front. In order for your house to work smarter, the house needs to understand itself. If you want to be able to say, “Turn on the lights in the kitchen,” then the home needs to be able to understand what the kitchen is, and what lights are in the kitchen. And if you want that to work with a third-party device, you need a trusted, customer-in-control mechanism to allow that to happen. So, it’s not about selling data, it’s about usefully linking together different devices to make your home actually smart. The interesting part is that the limiting factor in making your home intelligent isn’t AI, it’s context. And that’s what I was talking about to Reuters.

What kinds of data can my Roomba 980 collect about my home?

Angle: The robot uses its sensors [including its camera] to understand where it is and create visual landmarks, things that are visually distinctive that it can recognize again. As the robot explores the home as a vacuum, it knows where it is relative to where it started, and it creates a 2D map of the home. None of the images ever leave the robot; the only map information that leaves the robot would be if the customer says, “I would like to see where the robot went,” and then the map is processed into a prettier form and sent up to the cloud and to your phone. If you don’t want to see it, it stays on the robot and never leaves the robot.

Do you think that there’s a perception that these maps contain much more private information about our homes than they really do?

Angle: I think that if you look at [the map], you know exactly what it is. In the future, we’d like it to have more detail, so that you could give more sophisticated commands to the robot, from “Could you vacuum my kitchen?” in which case the robot needs to know where the kitchen is, to [in the future], “Go to the kitchen and get me a beer.” In that case, the robot needs to know where the kitchen is, where the refrigerator is, what a beer is, and how to grab it. We’re at a very benign point right now, and we’re trying to establish a foundation of trust with our customers about how they have control over their data. Over time, when we want our homes to be smarter, you’ll be able to allow your robot to better understand your home, so it can do things that you would like it to do, in a trusted fashion.

“Robots are viewed as creatures in the home. That’s both exciting and a little scary at the same time, because people anthropomorphize and attribute much more intelligence to them than they do to a smart speaker.”

Fundamentally, would the type of information that this sort of robot would be sharing with third parties be any more invasive than an Amazon Echo or Google Home?

Angle: Robots have this inherent explosive bit of interest, because they’re viewed as creatures in your home. That’s both exciting and a little scary at the same time, because people anthropomorphize and attribute much more intelligence to them than they do to a smart speaker. The amount of information that one of these robots collects is, in many ways, much less, but because it moves, it really captures people’s imagination.

Why do you think people seem to be more concerned about the idea of robots sharing their data?

Angle: I think it’s the idea that you’d have a “robot spy” in your home. Your home is your sanctuary, and people rightfully want their privacy. If we have something gathering data in their home, we’re beyond the point where a company can exploit their customers by stealthily gathering data and selling it to other people. The things you buy and place in your home are there to benefit you, not some third party. That was the fear that was unleashed by this idea of gathering and selling data unbeknownst to the customer. At iRobot, we’ve said, “Look, we’re not going to do this, we’re not going to sell your data.” We don’t even remember your map unless you tell us we can. Our very explicit strategy is building this trusted relationship with our customers, so that they feel good about the benefits that Roomba has.

How could robots like Roomba eventually come to understand more about our homes to enable more sophisticated functionality?

Angle: We’re in the land of R&D here, not Roomba products, but certainly there exists object-recognition technology that can determine what a refrigerator is, what a television is, what a table is. It would be pretty straightforward to say, if the room contains a refrigerator and an oven, it’s probably the kitchen. If a room has a bed, it’s probably a bedroom. You’ll never be 100 percent right, but rooms have purposes, and we’re certainly on a path where just by observing, a robot could identify a room.
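To make that heuristic concrete, here is a minimal sketch (ours, not iRobot’s) of the kind of rule-of-thumb room labeling Angle describes; the object labels are assumed to come from some off-the-shelf recognizer, and the hint lists are purely illustrative.

```python
# Illustrative sketch only -- not iRobot code. Assumes some off-the-shelf
# object recognizer has already produced a list of labels for the objects
# seen in a room (e.g., "refrigerator", "oven", "bed").

ROOM_HINTS = {
    "kitchen": {"refrigerator", "oven", "stove", "dishwasher"},
    "bedroom": {"bed", "dresser", "nightstand"},
    "living room": {"sofa", "television", "coffee table"},
}

def guess_room_type(detected_labels):
    """Return the best-matching room label and a rough confidence score."""
    detected = set(detected_labels)
    best_room, best_score = "unknown", 0.0
    for room, hints in ROOM_HINTS.items():
        # Fraction of this room's hint objects that were actually seen.
        score = len(detected & hints) / len(hints)
        if score > best_score:
            best_room, best_score = room, score
    return best_room, best_score

print(guess_room_type(["refrigerator", "oven", "chair"]))  # ('kitchen', 0.5)
```

As Angle says, a heuristic like this will never be 100 percent right, so a real system would treat the label as a guess for the user to confirm.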

What else do you think a robot like a Roomba could ultimately understand about your home?

Angle: We’re working on a number of things, some of which we’re happy to talk about and some of which less so at this point in time. But, why should your thermostat be on the wall? Why is one convenient place on the wall of one room the right place to measure temperature from, as opposed to where you like to sit? When you get into home control, your sensor location can be critically tied to the operation of your home. The opportunity to have the robot carry sensors with it around the home would allow the expansion from a point reading to a 2D or 3D map of those readings. As a result, the customer has a lot more control over [for example] how loud the stereo system is at a specific point, or what the temperature is at a specific point. You could also detect anomalies in the home if things are not working the way the customer would like them to work. Those are some simple examples of why moving a sensor around would matter.

“There’s a pretty sizeable market for webcams in the home. People are interested in security and intruder detection, and also in how their pets are doing. But invariably, what you want to see is not what your camera is pointing at. That’s something where a robot makes a lot more sense.”

Another good example would be, there’s actually a pretty sizeable market for webcams in the home. People are interested in security and intruder detection, and also in how their pets are doing. But invariably, what you want to see is not what your camera is currently pointing at. Some people fill up their homes with cameras, or you put a camera on a robot, and it moves to where you want to look. That’s something where a robot makes a lot more sense, and it’s interesting, if I want to have privacy in our home and yet still have a camera I can use, it’s actually a great idea to put one on a robot, because when the robot isn’t in the room with you, it can’t see you. So, the metaphor is a lot closer to if you had a butler in your home—when they’re not around, you can have your privacy back. This is a metaphor that I think works really well as we try to architect smart homes that are both aware of themselves, and yet afford privacy.

So a mobile robot equipped with sensors and mapping technology to be able to understand your home in this way could act like a smart home manager?

Angle: A spatial information organizer. There’s a hub with a chunk of software that controls everything, and that’s not necessarily the same thing as what the robot would do. What Apple and Amazon and various smart home companies are doing is trying to build hubs that everything connects to, but in order for these hubs to be actually smart, they need what I call spatial content: They need to understand what’s a room, and what’s in a room for the entire home. Ultimately, the home itself is turning into a robot, and if the robot’s not aware of itself, it can’t do the right things.

“A robot needs to understand what’s a room, and what’s in a room, for the entire home. Ultimately, the home itself is turning into a robot, and if the robot’s not aware of itself, it can’t do the right things.”

So, if you wanted to walk into a room and have the lights turn on and the heat come up, and if you started watching television and then left the room and wanted the television to turn off in the room you’d left and turn on in the room you’d gone to, all of those types of experiences where the home is seamlessly reacting to you require an understanding of rooms and what’s in each room. You can brute force that with lots of cameras and custom programming for your home, but I don’t believe that installations like this can be successful or scale. The solution where you own a Roomba anyway, and it just gives you all this information enabling your home to be smart, that’s an exciting vision of how we’re actually going to get smart homes.

What’s your current feeling about mobile manipulators in the home?

Angle: We’re getting there. In order for manipulation in the home to make sense, you need to know where you are, right? What’s the point of being able to get something if you don’t know where it is? So this idea that we need these maps that have information embedded in them about where stuff is and the ability to actually segment objects—there’s a hierarchy of understanding. You need to know where a room is as a first step. You need to identify where objects are—that’s recognition of larger objects. Then you need to be able to open a door, say, and now you’re processing larger objects to find handles that you can reach out and grab. The ability to do all of these things exists in research labs, and to an increasing degree in manufacturing facilities. We’re past the land of invention, of the proof of principle, and into the land of, could we reduce this to a consumer price point that would make sense to people in the home. We’re well on the way—we will definitely see this kind of robot in, I would say, five to 10 years; we’ll have robots that can go and get you a beer. I don’t think it’s going to be a lot shorter than that, because we have a few steps to go, but it’s less invention and more engineering.

We should note that we spoke with Angle just before Neato announced its new D7 robot vacuum, which adds persistent, actionable maps, arguably the first step toward a better understanding of the home. Since Roombas can already build a similar sort of map, and based on the sorts of things Angle spoke about in our interview, we’re expecting iRobot to add a substantial amount of intelligence and functionality to the Roomba in the very near future.



This morning, Amazon and iRobot announced “a definitive merger agreement under which Amazon will acquire iRobot” for US $1.7 billion. The announcement was a surprise, to put it mildly, and we’ve barely had a chance to digest the news. But taking a look at what’s already known can still yield initial (if incomplete) answers as to why Amazon and iRobot want to team up—and whether the merger seems like a good idea.

The press release, like most press releases about acquisitions of this nature, doesn’t include much in the way of detail. But here are some quotes:

“We know that saving time matters, and chores take precious time that can be better spent doing something that customers love,” said Dave Limp, SVP of Amazon Devices. “Over many years, the iRobot team has proven its ability to reinvent how people clean with products that are incredibly practical and inventive—from cleaning when and where customers want while avoiding common obstacles in the home, to automatically emptying the collection bin. Customers love iRobot products—and I'm excited to work with the iRobot team to invent in ways that make customers' lives easier and more enjoyable.”

“Since we started iRobot, our team has been on a mission to create innovative, practical products that make customers' lives easier, leading to inventions like the Roomba and iRobot OS,” said Colin Angle, chairman and CEO of iRobot. “Amazon shares our passion for building thoughtful innovations that empower people to do more at home, and I cannot think of a better place for our team to continue our mission. I’m hugely excited to be a part of Amazon and to see what we can build together for customers in the years ahead.”

There’s not much to go on here, and iRobot has already referred us to Amazon PR, which, to be honest, feels like a bit of a punch in the gut. I love (loved?) so many things about iRobot—their quirky early history working on weird DARPA projects and even weirder toys, everything they accomplished with the PackBot, and most of all, the fact that they’ve made a successful company building useful and affordable robots for the home, which is just…it’s so hard to do that I don’t even know where to start. And nobody knows what’s going to happen to iRobot going forward. I’m sure iRobot and Amazon have all kinds of plans and promises and whatnot, but still—I’m now nervous about iRobot’s future.

Why this is a good move for Amazon is clear, but what exactly is in it for iRobot?

It seems fairly obvious why Amazon wanted to get its hands on iRobot. Amazon has been working for years to integrate itself into homes, first with audio (Alexa), then with video (Ring), and more recently with some questionable home robots of its own, like its indoor security drone and Astro. Amazon clearly needs some help in understanding how to make home robots useful, and iRobot can likely provide some guidance, with its deep bench of experienced robotics engineers. And needless to say, iRobot is already well established in a huge number of homes, with brand recognition comparable to something like Velcro or Xerox, in the sense that people don’t have “robot vacuums,” they have Roombas.

All those Roombas in all of those homes are also collecting a crazy amount of data for iRobot. iRobot itself has been reasonably privacy-sensitive about this, but it would be naïve not to assume that Amazon sees a lot of potential for learning much, much more about what goes on in our living rooms. That prospect is the more concerning part of this deal, because Amazon has its own ideas about data privacy, and it’s unclear what the acquisition will mean for increasingly camera-reliant Roombas going forward.

I get why this is a good move for Amazon, but I must admit that I’m still trying to figure out what exactly is in it for iRobot, besides of course that “$61 per share in an all-cash transaction valued at approximately $1.7 billion.” Which, to be fair, seems like a heck of a lot of money. Usually when these kinds of mergers happen (and I’m thinking back to Google acquiring all those robotics companies in 2013), the hypothetical appeal for the robotics company is that suddenly they have a bunch more resources to spend on exciting new projects along with a big support structure to help them succeed.

It’s true that iRobot has apparently had some trouble finding ways to innovate and grow, with its biggest potential new consumer product (the Terra lawn mower) on pause since 2020. It could be that the big pile of cash, plus no longer having to worry so much about growth as a publicly traded company, plus some new Amazon-ish projects to work on, is reason enough for this acquisition.

My worry, though, is that iRobot is just going to get completely swallowed into Amazon and effectively cease to exist in a meaningful and unique way. That is how acquisitions like this have often gone, but I hope the relationship between Amazon and iRobot proves to be an exception, and there is some precedent: Boston Dynamics, for example, has survived multiple acquisitions while keeping its technology and philosophy more or less independent and intact. It’ll be up to iRobot to fight aggressively to preserve itself, and keeping Colin Angle as CEO is a good start.

We’ll be trying to track down more folks to talk to about this over the coming weeks for a more nuanced and in-depth perspective. In the meantime, make sure to give your Roomba a hug—it’s been quite a day for little round robot vacuums.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

This probably counts as hard mode for Ikea chair assembly.

[ Naver Lab ]

As anyone working with robotics knows, it’s mandatory to spend at least 10 percent of your time just mucking about with them because it’s fun, as GITAI illustrates with its new 10-meter robotic arm.

[ GITAI ]

Well, this is probably the weirdest example of domain randomization in simulation for quadrupeds that I’ve ever seen.

[ RSL ]

RoboCup 2022 was held in Bangkok, Thailand. The final match was between B-Human from Bremen (jerseys in black) and HTWK Robots from Leipzig (jerseys in blue). The video starts with one of our defending robots starting a duel with the opponent. After a short time, a pass is made to another robot, which tries to score a goal, but the opponent goalie is able to catch the ball. Afterwards, another attacker robot is already waiting at the center circle to take its chance to score a goal through all four opponent robots.

[ Team B-Human ]

The mission to return Martian samples back to Earth will see a European 2.5-meter-long robotic arm pick up tubes filled with precious soil from Mars and transfer them to a rocket for a historic interplanetary delivery.

[ ESA ]

I still cannot believe that this is an approach to robotic fruit-picking that actually works.

[ Tevel Aerobotics ]

This video shows the basic performance of the humanoid robot Torobo, which is used as a research platform for JST’s Moonshot R&D program.

[ Tokyo Robotics ]

Volocopter illustrates why I always carry two violins with me everywhere. You know, just in case.

[ Volocopter ]

We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning. Developing algorithms to enable a legged robot to shoot a soccer ball to a given target is a challenging problem that combines robot motion control and planning into one task.

[ Hybrid Robotics ]

I will always love watching Cassie try very, very hard to not fall over, and then fall over. <3

[ Michigan Robotics ]

I don’t think this paper is about teaching bipeds to walk with attitude, but it should be.

[ DLG ]

Modboats are capable of collective swimming in arbitrary configurations! In this video you can see three different configurations of the Modboats swim across our test space and demonstrate their capabilities.

[ ModLab ]

How have we built our autonomous driving technology to navigate the world safely? It comes down to three easy steps: Sense, Solve, and Go. Using a combination of lidar, camera, radar, and compute, the Waymo Driver can visualize the world, calculate what others may do, and proceed smoothly and safely, day and night.

[ Waymo ]

Alan Alda discusses evolutionary robotics with Hod Lipson and Jordan Pollack on Scientific American Frontiers in 1999.

[ Creative Machines Lab ]

Brady Watkins gives us insight into how a big company like Softbank Robotics looks at the robotics market.

[ Robohub ]



I don’t know about you, but being stuck at home during the pandemic made me realize two things. Thing the first: My entire life is disorganized. And thing the second: Organizing my life, and then keeping organized, is a pain in the butt. This is especially true for those of us stuck in apartments that are a bit smaller than we’d like them to be. With space at a premium, Mengni Zhang, a Ph.D. student at Cornell’s Architectural Robotics Lab, looked beyond floor space. Zhang wants to take advantage of wall space—even if it’s not easily reachable—using a small swarm of robot shelves that offer semiautonomous storage on demand.

“During the pandemic I saw an increased number of articles advising people to clean up and declutter at home,” Zhang explains. “We know the health benefits of maintaining an organized lifestyle, yet I could not find many empirical studies on understanding organizational behaviors, or examples of domestic robotic organizers for older adults or users with mobility impairments. There are already many assistive technologies, but most are floor based, which may not work so well for people living in small urban apartments. So, I tried to focus more on indoor wall-climbing robots, sort of like Roomba but on the wall.

“The main goal was to quickly build a series of functional prototypes (here I call them SORT, which stands for ‘Self-Organizing Robot Team’) to conduct user studies to understand different people's preferences and perceptions toward this organizer concept. By helping people declutter and rearrange personal items on walls and delivering them to users as needed or wanted while providing some ambient interactions, I’m hoping to use these robots to improve quality of life and enhance our home environments.”

This idea of intelligent architecture is a compelling one, I think—it’s sort of like the Internet of Things, except with an actuated physical embodiment that makes it more useful. Personally, I like to imagine hanging a coat on one of these little dudes and having it whisked up out of the way, or maybe they could even handle my bike, if enough of them work together. As Zhang points out, this concept could be especially useful for folks with disabilities who need additional workspace flexibility.

Besides just object handling, it’s easy to imagine these little robots operating as displays, as artwork, as sun-chasing planters, lights, speakers, or anything else. It’s just a basic proof of concept at the moment, and one that does require a fair amount of infrastructure to function in its current incarnation (namely, ferrous walls), but I certainly appreciate the researcher’s optimism in suggesting that “wall-climbing robots like the ones we present might become a next ‘killer app’ in robotics, providing assistance and improving life quality.”



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

The ocean contains a seemingly endless expanse of territory yet to be explored—and mapping out these uncharted waters globally poses a daunting task. Fleets of autonomous underwater robots could be invaluable tools to help with mapping, but these need to be able to navigate cluttered areas while remaining efficient and accurate.

In a study published 24 June in the IEEE Journal of Oceanic Engineering, one research team has developed a novel framework that allows autonomous underwater robots to map cluttered areas with high efficiency and low error rates.

A major challenge in mapping underwater environments is the uncertainty of the robot’s position.

“Because GPS is not available underwater, most underwater robots do not have an absolute position reference, and the accuracy of their navigation solution varies,” explains Brendan Englot, an associate professor of mechanical engineering at the Stevens Institute of Technology, in Hoboken, N.J., who was involved in the study. “Predicting how it will vary as a robot explores uncharted territory will permit an autonomous underwater vehicle to build the most accurate map possible under these challenging circumstances.”

The model created by Englot’s team uses a virtual map that abstractly represents the surrounding area that the robot hasn’t seen yet. They developed an algorithm that plans a route over this virtual map in a way that takes the robot's localization uncertainty and perceptual observations into account.

The perceptual observations are collected using sonar imaging, which helps detect objects in the environment in front of the robot within a 30-meter range and a 120-degree field of view. “We process the imagery to obtain a point cloud from every sonar image. These point clouds indicate where underwater structures are located relative to the robot,” explains Englot.
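As a rough illustration of the trade-off such a framework manages (this is our toy sketch, not the authors’ code), candidate views can be scored by how much unexplored area the sonar footprint would reveal, minus a penalty for how much dead-reckoning uncertainty is predicted to accumulate on the way there; the grid map and the distance-based uncertainty term below are stand-in assumptions.

```python
import math
import random

# Toy sketch of uncertainty-penalized view selection, in the spirit of the
# framework described above -- not the authors' implementation. The grid
# "map" and the distance-based uncertainty term are stand-in assumptions.

def expected_new_area(view, explored_cells, sensor_range=30.0, fov_deg=120.0):
    """Count unexplored 1 m grid cells inside the sonar footprint (30 m, 120 deg)."""
    x, y, heading = view
    new = 0
    for cx in range(int(x - sensor_range), int(x + sensor_range) + 1):
        for cy in range(int(y - sensor_range), int(y + sensor_range) + 1):
            if (cx, cy) in explored_cells:
                continue
            dx, dy = cx - x, cy - y
            if math.hypot(dx, dy) > sensor_range:
                continue
            bearing = math.degrees(math.atan2(dy, dx)) - heading
            if abs((bearing + 180.0) % 360.0 - 180.0) <= fov_deg / 2:
                new += 1
    return new

def predicted_uncertainty_growth(robot_xy, view):
    """Stand-in model: pose uncertainty grows with distance traveled without GPS."""
    return math.hypot(view[0] - robot_xy[0], view[1] - robot_xy[1])

def choose_next_view(candidates, robot_xy, explored_cells, weight=5.0):
    """Pick the view that best trades new coverage against navigation drift."""
    return max(
        candidates,
        key=lambda v: expected_new_area(v, explored_cells)
        - weight * predicted_uncertainty_growth(robot_xy, v),
    )

if __name__ == "__main__":
    random.seed(0)
    explored = {(x, y) for x in range(-10, 11) for y in range(-10, 11)}
    candidates = [
        (random.uniform(-50, 50), random.uniform(-50, 50), random.uniform(0, 360))
        for _ in range(20)
    ]
    print(choose_next_view(candidates, (0.0, 0.0), explored))
```

In a sketch like this, the weight on the uncertainty term is the knob that trades exploration speed against map accuracy, which is the compromise the team set out to strike.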

The research team then tested their approach using a BlueROV2 underwater robot in a harbor at Kings Point, N.Y., an area that Englot says was large enough to permit significant navigation errors to build up, but small enough to perform numerous experimental trials without too much difficulty. The team compared their model to several other existing ones, testing each model in at least three 30-minute trials in which the robot navigated the harbor. The different models were also evaluated through simulations.

“The results revealed that each of the competing [models] had its own unique advantages, but ours offered a very appealing compromise between exploring unknown environments quickly while building accurate maps of those environments,” says Englot.

He notes that his team has applied for a patent that would consider their model for subsea oil and gas production purposes. However, they envision that the model will also be useful for a broader set of applications, such as inspecting offshore wind turbines, offshore aquaculture infrastructure (including fish farms), and civil infrastructure, such as piers and bridges.

“Next, we would like to extend the technique to 3D mapping scenarios, as well as situations where a partial map may already exist, and we want a robot to make effective use of that map, rather than explore an environment completely from scratch,” says Englot. “If we can successfully extend our framework to work in 3D mapping scenarios, we may also be able to use it to explore networks of underwater caves or shipwrecks.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

A University of Washington team created a new tool that can design a 3D-printable passive gripper and calculate the best path to pick up an object. The team tested this system on a suite of 22 objects—including a 3D-printed bunny, a doorstop-shaped wedge, a tennis ball, and a drill.

[ UW ]

Combining off-the-shelf components with 3D printing, the Wheelbot is a symmetric reaction-wheel unicycle that can jump onto its wheels from any initial position. With non-holonomic and under-actuated dynamics, as well as two coupled unstable degrees of freedom, the Wheelbot provides a challenging platform for nonlinear and data-driven control research.

[ Wheelbot ]

I think we posted a similar video to this before, but it’s so soothing and beautifully shot and this time it’s fully autonomous. Watch until the end for a very impressive catch!

[ Griffin ]

Quad-SDK is an open source, ROS-based full stack software framework for agile quadrupedal locomotion. The design of Quad-SDK is focused on the vertical integration of planning, control, estimation, communication, and development tools which enable agile quadrupedal locomotion in simulation and hardware with minimal user changes for multiple platforms.

[ Quad-SDK ]

Zenta makes some of the best legged robots out there, including MorpHex, which appears to be still going strong.

And now, a relaxing ride with MXPhoenix:

[ Zenta ]

We have developed a set of teleoperation strategies using human hand gestures and arm movements to fully teleoperate a legged manipulator through whole-body control. To validate the system, a pedal bin item disposal demo was conducted to show that the robot could exploit its kinematics redundancy to follow human commands while respecting certain motion constraints, such as keeping balance.

[ University of Leeds ]

Thanks Chengxu!

An introduction to HEBI Robotics’ line of modular mobile bases for confined spaces, dirty environments, and magnetic crawling.

[ HEBI Robotics ]

Thanks Kamal!

Loopy is a robotic swarm of 1-degree-of-freedom (DOF) agents (i.e., a closed loop made of 36 Dynamixel servos). In this iteration of Loopy, agents use average consensus to determine the orientation of a received shape that requires the least amount of movement. In this video, Loopy is spelling out the alphabet.

[ WVUIRL ]

The latest robotics from DLR, as shared by Bram Vanderborght.

[ DLR ]

Picking a specific object from clutter is an essential component of many manipulation tasks. Partial observations often require the robot to collect additional views of the scene before attempting a grasp. This paper proposes a closed-loop next-best-view planner that drives exploration based on occluded object parts.

[ Active Grasp ]

This novel flying system combines an autonomous unmanned aerial vehicle with ground-penetrating radar to detect buried objects such as landmines. The system stands out with sensor-specific low-altitude flight maneuvers and high-accuracy position estimation. Experiments show radar detections of targets buried in sand.

[ FindMine ]

In this experiment, we demonstrate a combined exploration and inspection mission using the RMF-Owl collision tolerant aerial robot inside the Nutec RelyOn facilities in Trondheim, Norway. The robot is tasked to autonomously explore and inspect the surfaces of the environment—within a height boundary—with its onboard camera sensor given no prior knowledge of the map.

[ NTNU ]

Delivering donuts to our incredible Turtlebot 4 development team! Includes a full walkthrough of the mapping and navigation capabilities of the Turtlebot 4 mobile robot with Maddy Thomson, Robotics Demo Designer from Clearpath Robotics. Create a map of your environment using SLAM Toolbox and learn how to get your Turtlebot 4 to navigate that map autonomously to a destination with the ROS 2 navigation stack.

[ Clearpath ]



NASA has announced a conceptual mission architecture for the Mars Sample Return (MSR) program, and it’s a pleasant surprise. The goal of the proposed program is to return the rock samples that the Perseverance rover is currently collecting on the Martian surface to Earth, which, as you can imagine, is not a simple process. It’ll involve sending a sample-return lander (SRL) to Mars, getting those samples back to the lander, launching a rocket back to Mars orbit from the lander, and finally capturing that rocket with an orbiter that’ll cart the samples back to Earth.

As you might expect, the initial idea was to send a dedicated rover to go grab the sample tubes from wherever Perseverance had cached them and bring them back to the lander with the rocket, because how else are you going to go get them, right? But NASA has decided that Plan A is for Perseverance to drive the samples to the SRL itself. Plan B, if Perseverance can’t make it, is to collect the samples with two helicopters instead.

NASA’s approach here is driven by two things: First, Curiosity has been on Mars for 10 years, and is still doing great. Perseverance is essentially an improved version of Curiosity, giving NASA confidence that the newer rover will still be happily roving by the time the SRL lands. And second, the Ingenuity helicopter is also still doing awesome, which is (let’s be honest) kind of a surprise, considering that it’s a tech demo that was never designed for the kind of performance that we’ve seen. NASA now seems to believe that helicopters are a viable tool for operating on the Martian surface, and therefore should be considered as an option for Mars operations.

In the new sample-return mission concept, Perseverance will continue collecting samples as it explores the river delta in Jezero crater. It’ll collect duplicates of each sample, and once it has 10 samples (20 tubes’ worth), it’ll cache the duplicates somewhere on the surface as a sort of backup plan. From there, Percy will keep exploring and collecting samples (but not duplicates) as it climbs out of the Jezero crater, where it’ll meet the sample-return lander in mid-2030. NASA says that the SRL will be designed with pinpoint landing capability, able to touch down within 50 meters of where NASA wants it to, meaning that a rendezvous with Perseverance should be a piece of cake—or as much of a piece of cake as landing on Mars can ever be. After Perseverance drives up to the SRL, a big arm on the SRL will pluck the sample tubes out of Perseverance and load them into a rocket, and then off they go to orbit and eventually back to Earth, probably by 2033.

The scenario described above is how everything is supposed to work, but it depends entirely on Perseverance doing what it’s supposed to do. If the rover is immobilized, the SRL will still be able to land nearby, but those sample tubes will have to get back to the SRL somehow, and NASA has decided that the backup plan will be helicopters.

The two “Ingenuity class” helicopters that the SRL will deliver to Mars will be basically the same size as Ingenuity, although a little bit heavier. There are two big differences: first, each helicopter gets a little arm for grabbing sample tubes (which weigh between 100 and 150 grams each) off of the Martian surface. And second, the helicopters get small wheels at the end of each of their legs. It sounds like these wheels will be powered, and while they’re not going to offer a lot of mobility, presumably it’ll be enough so that if the helicopter lands close to a sample, it can drive itself a short distance to get within grabbing distance. Here’s how Richard Cook, the Mars sample-return program manager at JPL, says the helicopters would work:

“In the scenario where the helicopters are used [for sample retrieval], each of the helicopters would be able to operate independently. They’d fly out to the [sample] depot location from where SRL landed, land in proximity to the sample tubes, roll over to them, pick one up, then fly back in proximity to the lander, roll up to the lander, and drop [the tube] onto the ground in a spot where the [European Space Agency] sample transfer arm could pick it up and put it into the [Mars Ascent Vehicle]. Each helicopter would be doing that separately, and over the course of four or five days per tube, would bring all the sample tubes back to the lander that way.”

This assumes that Perseverance didn’t explode or roll down a hill or something, and that it would be able to drop its sample tubes on the ground for the helicopters to pick up. Worst case, if Percy completely bites it, the SRL could land near the backup sample cache in the Jezero crater and the helicopters could grab those instead.
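For what it’s worth, Cook’s description boils down to a repeating per-tube fetch cycle. The sketch below is purely illustrative: the steps and the four-to-five-day-per-tube figure come from the quote above, everything else is ours, and it is obviously not flight software.

```python
# Purely illustrative summary of the helicopter retrieval cycle Cook
# describes above -- not NASA flight software. Step names are ours.

RETRIEVAL_CYCLE = [
    "fly from the lander to the sample depot",
    "land near a sample tube",
    "roll within grabbing distance",
    "pick up the tube with the arm",
    "fly back toward the lander",
    "roll up to the lander",
    "drop the tube for the ESA transfer arm",
]

def estimate_campaign_days(num_tubes, days_per_tube=(4, 5)):
    """Rough duration bounds using the 4-to-5-days-per-tube figure quoted above."""
    low, high = days_per_tube
    return num_tubes * low, num_tubes * high

low, high = estimate_campaign_days(num_tubes=10)
print(f"{len(RETRIEVAL_CYCLE)} steps per tube; roughly {low}-{high} days for 10 tubes")
```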

Weirdly, the primary mission of the helicopters is as a backup to Perseverance, meaning that if the rover is healthy and able to deliver the samples to the SRL itself, the helicopters won’t have much to do. NASA says they could “observe the area around the lander,” which seems underwhelming, or take pictures of the Mars Ascent Vehicle launch, which seems awesome but not really worth sending two helicopters to Mars for. I’m assuming that this’ll get explored a little more, because it seems like a potential wasted opportunity otherwise.

We’re hoping that this announcement won’t have any impact on JPL’s concept for a much larger, much more capable Mars Science Helicopter, but this sample-return mission (rather than a new science mission) is clearly the priority right now. The most optimistic way of looking at it is that this sample-return-mission architecture is a strong vote of confidence by NASA in helicopters on Mars in general, making a flagship helicopter mission that much more likely. But we’re keeping our fingers crossed.



Bugs have long taunted roboticists with how utterly incredible they are. Astonishingly mobile, amazingly efficient, super robust, and in some cases, literally dirt cheap. But making a robot that’s an insect equivalent is extremely hard—so hard that it’s frequently easier to just hijack living insects themselves and put them to work for us. You know what’s even easier than that, though?

Hijacking and repurposing dead bugs. Welcome to necrobotics.

Spiders are basically hydraulic (or pneumatic) grippers. Living spiders control their limbs by adjusting blood pressure on a limb-by-limb basis through an internal valve system. Higher pressure extends the limb, acting against an antagonistic flexor muscle that curls the limb when the blood pressure within is reduced. This, incidentally, is why spider legs all curl up when the spider shuffles off the mortal coil: There’s a lack of blood pressure to balance the force of the flexors.

This means that actuating all eight limbs of a spider that has joined the choir invisible is relatively straightforward. Simply stab it in the middle of that valve system, inject some air, and poof, all of the legs inflate and straighten.

Our strategy contrasts with bioinspired approaches in which researchers look to the spider’s physical morphology for design ideas that are subsequently implemented in complex engineered systems, and also differs from biohybrid systems in which live or active biological materials serve as the basis for a system, demanding careful and precise maintenance.

We repurposed the cadaver of a spider to create a pneumatically actuated gripper that is fully functional following only one simple assembly step, allowing us to circumvent the usual tedious and constraining fabrication steps required for fluidically driven actuators and grippers.

This work, from researchers at the Preston Innovation Lab at Rice University, in Houston, is described in a paper just published in Advanced Science. In the paper, the team does a little bit of characterization of the performance of the deceased-spider gripper, and it’s impressive: It can lift 1.3 times its own weight, exert a peak gripping force of 0.35 millinewton, and actuate at least 700 times before the limbs or the valve system start to degrade in any significant way. After 1,000 cycles, some cracks appear in the dead spider’s joints, likely because of dehydration. But the researchers think that by coating the spider in something like beeswax, they could likely forestall this breakdown a great deal. The demised-spider gripper is able to successfully pick up a variety of objects, likely because of a combination of the inherent compliance of the legs as well as hairlike microstructures on the legs that work kind of like a directional adhesive.

We are, unfortunately (although somewhat obviously), unable to say that no spiders were harmed over the course of this research. According to the paper, “the raw biotic material (i.e., the spider cadaver) was obtained by euthanizing a wolf spider through exposure to freezing temperature (approximately -4 °C) for a period of 5–7 days.” The researchers note that “there are currently no clear guidelines in the literature regarding ethical sourcing and humane euthanasia of spiders,” which is really something that should be figured out, considering how much we know about the cute-but-still-terrifying personalities some spiders have.

The wolf spider was a convenient choice because it exerts a gripping force approximately equal to its own weight, which raises the interesting question of what kind of performance could be expected from spiders of different sizes. Based on a scaling analysis, the researchers suggest that itty-bitty 10-milligram jumping spiders could exert a gripping force exceeding 200 percent of their body weight, while very much not itty-bitty 200-gram goliath spiders may only be able to grasp with a force that is 10 percent of their body weight. But that works out to 20 grams, which is still kind of terrifying. Goliath spiders are big.
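Converting those percentages into actual forces is just arithmetic on the figures quoted above (the masses, the percentages, and the standard conversion of 1 gram-force to about 9.81 millinewtons); the snippet below does only that conversion and makes no claim about the researchers’ underlying scaling model.

```python
# Back-of-the-envelope conversion of the grip-to-weight figures quoted above.
# Masses and percentages come from the article; the only other ingredient is
# the unit conversion 1 gram-force = 9.81 millinewtons.

MN_PER_GRAM_FORCE = 9.81  # millinewtons per gram of mass under Earth gravity

spiders = {
    # name: (body mass in grams, grip force as a fraction of body weight)
    "jumping spider": (0.010, 2.0),   # 10 mg, ">200 percent" (lower bound used)
    "goliath spider": (200.0, 0.10),  # 200 g, "~10 percent"
}

for name, (mass_g, ratio) in spiders.items():
    grip_grams = mass_g * ratio            # grip force expressed as an equivalent mass
    grip_mn = grip_grams * MN_PER_GRAM_FORCE
    print(f"{name}: could hold ~{grip_grams:g} g (~{grip_mn:.1f} mN)")
```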

For better or worse, insects and arachnids seem likely to offer the most necrobotic potential, because fabricating pneumatics and joints and muscles at that scale can be very challenging, if not impossible. And spiders (as well as other spiderlike arachnids) in particular offer biodegradable, eco-friendly on-demand actuation with capabilities that the researchers hope to extend significantly. A capacitive proximity sensor could enable autonomy, for example, to “discreetly capture small biological creatures for sample collection in real-world scenarios.” Independent actuation of limbs could result in necrobotic locomotion. And the researchers are also planning to explore high-speed articulation with whip scorpions as well as true microscale manipulation with Patu digua spiders. I’ll let you google whip scorpion on your own because they kind of freak me out, but here’s a picture of a Patu digua, with a body measuring about a quarter of a millimeter:

Squee!



By 2050, the global population aged 65 or older will be nearly double what it is today. The number of people over the age of 80 will triple, approaching half a billion. Supporting an aging population is a worldwide concern, but this demographic shift is especially pronounced in Japan, where more than a third of the population will be 65 or older by midcentury.

Toyota Research Institute (TRI), which was established by Toyota Motor Corp. in 2015 to explore autonomous cars, robotics, and “human amplification technologies,” has also been focusing a significant portion of its research on ways to help older people maintain their health, happiness, and independence as long as possible. While an important goal in itself, improving self-sufficiency for the elderly also reduces the amount of support they need from society more broadly. And without technological help, sustaining this population in an effective and dignified manner will grow increasingly difficult—first in Japan, but globally soon after.


Gill Pratt, Toyota’s Chief Scientist and the CEO of TRI, believes that robots have a significant role to play in assisting older people by solving physical problems as well as providing mental and emotional support. With a background in robotics research and five years as a program manager at the Defense Advanced Research Projects Agency, during which time he oversaw the DARPA Robotics Challenge in 2015, Pratt understands how difficult it can be to bring robots into the real world in a useful, responsible, and respectful way. In an interview earlier this year in Washington, D.C., with IEEE Spectrum’s Evan Ackerman, he said that the best approach to this problem is a human-centric one: “It’s not about the robot, it’s about people.”

What are the important problems that we can usefully and reliably solve with home robots in the relatively near term?

Gill Pratt: We are looking at the aging society as the No. 1 market driver of interest to us. Over the last few years, we’ve come to the realization that an aging society creates two problems. One is within the home for an older person who needs help, and the other is for the rest of society—for younger people who need to be more productive to support a greater number of older people. The dependency ratio is the fraction of the population that works relative to the fraction that does not. As an example, in Japan, in not too many years, it’s going to get pretty close to 1:1. And we haven’t seen that, ever.

Solving physical problems is the easier part of assisting an aging society. The bigger issue is actually loneliness. This doesn’t sound like a robotics thing, but it could be. Related to loneliness, the key issue is having purpose, and feeling that your life is still worthwhile.

What we want to do is build a time machine. Of course we can’t do that, that’s science fiction, but we want to be able to have a person say, “I wish I could be 10 years younger” and then have a robot effectively help them as much as possible to live that kind of life.

There are many different robotic approaches that could be useful to address the problems you’re describing. Where do you begin?

Pratt: Let me start with an example, and this is one we talk about all of the time because it helps us think: Imagine that we built a robot to help with cooking. Older people often have difficulty with cooking, right?

Well, one robotic idea is to just cook meals for the person. This idea can be tempting, because what could be better than a machine that does all the cooking? Most roboticists are young, and most roboticists have all these interesting, exciting, technical things to focus on. And they think, “Wouldn’t it be great if some machine made my meals for me and brought me food so I could get back to work?”

But for an older person, what they would truly find meaningful is still being able to cook, and still being able to have the sincere feeling of “I can still do this myself.” It’s the time-machine idea—helping them to feel that they can still do what they used to be able to do and still cook for their family and contribute to their well-being. So we’re trying to figure out right now how to build machines that have that effect—that help you to cook but don’t cook for you, because those are two different things.

A robot for your home may not look much like this research platform, but it’s how TRI is learning to make home robots that are useful and safe. Tidying and cleaning are physically repetitive tasks that are ideal for home robots, but still a challenge since every home is different, and every person expects their home to be organized and cleaned differently. Photo: Toyota Research Institute

How can we manage this temptation to focus on solving technical problems rather than more impactful ones?

Pratt: What we have learned is that you start with the human being, the user, and you say, “What do they need?” And even though all of us love gadgets and robots and motors and amplifiers and hands and arms and legs and stuff, just put that on the shelf for a moment and say: “Okay. I want to imagine that I’m a grandparent. I’m retired. It’s not quite as easy to get around as when I was younger. And mostly I’m alone.” How do we help that person have a truly better quality of life? And out of that will occasionally come places where robotic technology can help tremendously.

A second point of advice is to try not to look for your keys where the light is. There’s an old adage about a person who drops their keys on the street at night, and so they go look for them under a streetlight, rather than the place they dropped them. We have an unfortunate tendency in the robotics field—and I’ve done it too—to say, “Oh, I know some mathematics that I can use to solve this problem over here.” That’s where the light is. But unfortunately, the problem that actually needs to get solved is over there, in the dark. It’s important to resist the temptation to use robotics as a vehicle for only solving problems that are tractable.

It sounds like social robots could potentially address some of these needs. What do you think is the right role for social robots for elder care?

Pratt: For people who have advanced dementia, things can be really, really tough. There are a variety of robotic-like things or doll-like things that can help a person with dementia feel much more at ease and genuinely improve the quality of their life. They sometimes feel creepy to people who don’t have that disability, but I believe that they’re actually quite good, and that they can serve that role well.

There’s another huge part of the market, if you want to think about it in business terms, where many people’s lives can be tremendously improved even when they’re simply retired. Perhaps their spouse has died, they don’t have much to do, and they're lonely and depressed. Typically, many of them are not technologically adept the way that their kids or their grandkids are. And the truth is their kids and their grandkids are busy. And so what can we really do to help?

Here there’s a very interesting dilemma, which is that we want to build a social-assistive technology, but we don’t want to pretend that the robot is a person. We’ve found that people will anthropomorphize a social machine, which shouldn’t be a surprise, but it’s very important to not cross a line where we are actively trying to promote the idea that this machine is actually real—that it’s a human being, or like a human being.

So there are a whole lot of things that we can do. The field is just beginning, and much of the improvement to people's lives can happen within the next 5 to 10 years. In the social robotics space, we can use robots to help connect lonely people with their kids, their grandkids, and their friends. We think this is a huge, untapped potential.

A robot for your home may not look much like this research platform, but it’s how TRI is learning to make home robots that are useful and safe. Perceiving and grasping transparent objects like drinking glasses is a particularly difficult task. Photo: Toyota Research Institute

Where do you draw the line with the amount of connection that you try to make between a human and a machine?

Pratt: We don’t want to trick anybody. We should be very ethically stringent, I think, to not try to fool anyone. People will fool themselves plenty—we don't have to do it for them.

To whatever extent that we can say, “This is your mechanized personal assistant,” that’s okay. It’s a machine, and it’s here to help you in a personalized way. It will learn what you like. It will learn what you don’t like. It will help you by reminding you to exercise, to call your kids, to call your friends, to get in touch with the doctor, all of those things that it's easy for people to miss on their own. With these sorts of socially assistive technologies, that’s the way to think of it. It’s not taking the place of other people. It’s helping you to be more connected with other people, and to live a healthier life because of that.

How much do you think humans should be in the loop with consumer robotic systems? Where might it be most useful?

Pratt: We should be reluctant to do person-behind-the-curtain stuff, although from a business point of view, we absolutely are going to need that. For example, say there's a human in an automated vehicle that comes to a double-parked car, and the automated vehicle doesn’t want to go around by crossing the double yellow line. Of course the vehicle should phone home and say, “I need an exception to cross the double yellow line.” A human being, for all kinds of reasons, should be the one to decide whether it’s okay to do the human part of driving, which is to make an exception and not follow the rules in this particular case.

However, having the human actually drive the car from a distance assumes that the communication link between the two of them is so reliable it’s as if the person is in the driver’s seat. Or, it assumes that the competence of the car to avoid a crash is so good that even if that communications link went down, the car would never crash. And those are both very, very hard things to do. So human beings that are remote, that perform a supervisory function, that’s fine. But I think that we have to be careful not to fool the public by making them think that nobody is in that front seat of the car, when there’s still a human driving—we’ve just moved that person to a place you can’t see.

In the robotics field, many people have spoken about this idea that we’ll have a machine to clean our house operated by a person in some part of the world where it would be good to create jobs. I think pragmatically it’s actually difficult to do this. And I would hope that the kinds of jobs we create are better than sitting at a desk and guiding a cleaning machine in someone’s house halfway around the world. It’s certainly not as physically taxing as having to be there and do the work, but I would hope that the cleaning robot would be good enough to clean the house by itself almost all the time and just occasionally when it’s stuck say, “Oh, I’m stuck, and I’m not sure what to do.” And then the human can help. The reason we want this technology is to improve quality of life, including for the people who are the supervisors of the machine. I don’t want to just shift work from one place to the other.

These bubble grippers are soft to the touch, making them safe for humans to interact with, but they also include the necessary sensing to be able to grasp and identify a wide variety of objects. Photo: Toyota Research Institute

Can you give an example of a specific technology that TRI is working on that could benefit the elderly?

Pratt: There are many examples. Let me pick one that is very tangible: the Punyo project.

In order to truly help elderly people live as if they are younger, robots not only need to be safe, they also need to be strong and gentle, able to sense and react to both expected and unexpected contacts and disturbances the way a human would. And of course, if robots are to make a difference in quality of life for many people, they must also be affordable.

Compliant actuation, where the robot senses physical contact and reacts with flexibility, can get us part way there. To get the rest of the way, we have developed instrumented, functional, low-cost compliant surfaces that are soft to the touch. We started with bubble grippers that have high-resolution tactile sensing for hands, and we are now adding compliant surfaces to all other parts of the robot's body to replace rigid metal or plastic. Our hope is to enable robot hardware to have the strength, gentleness, and physical awareness of the most able human assistant, and to be affordable by large numbers of elderly or disabled people.

What do you think the next DARPA challenge for robotics should be?

Pratt: Wow. I don’t know! But I can tell you what ours is [at TRI]. We have a challenge that we give ourselves right now in the grocery store. This doesn't mean we want to build a machine that does grocery shopping, but we think that trying to handle all of the difficult things that go on when you’re in the grocery store—picking things up even though there’s something right next to it, figuring out what the thing is even if the label that’s on it is half torn, putting it in the basket—this is a challenge task that will develop the same kind of capabilities we need for many other things within the home. We were looking for a task that didn’t require us to ask for 1,000 people to let us into their homes, and it turns out that the grocery store is a pretty good one. We have a hard time helping people to understand that it’s not about the store, it’s actually about the capabilities that let you work in the store, and that we believe will translate to a whole bunch of other things. So that’s the sort of stuff that we're doing work on.

As you’ve gone through your career from academia to DARPA and now TRI, how has your perspective on robotics changed?

Pratt: I think I’ve learned that lesson that I was telling you about before—I understand much more now that it’s not about the robot, it’s about people. And ultimately, taking this user-centered design point of view is easy to talk about, but it’s really hard to do.

As technologists, the reason we went into this field is that we love technology. I can sit and design things on a piece of paper and feel great about it, and yet I’m never thinking about who it is actually going to be for, and what am I trying to solve. So that’s a form of looking for your keys where the light is.

The hard thing to do is to search where it’s dark, and where it doesn’t feel so good, and where you actually say, “Let me first of all talk to a lot of people who are going to be the users of this product and understand what their needs are. Let me not fall into the trap of asking them what they want and trying to build that because that’s not the right answer.” So what I’ve learned most of all is the need to put myself in the user’s shoes, and to really think about it from that point of view.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Introducing ARTEMIS: The next-generation humanoid robot platform to serve us for the next 10 years. This is a sneak peek of what is to come. Stay tuned!

[ RoMeLa ]

We approach the problem of learning by watching humans in the wild. We call our method WHIRL: In the Wild Human-Imitated Robot Learning. In WHIRL, we aim to use human videos to extract a prior over the intent of the demonstrator and use this to initialize our agent's policy. We introduce an efficient real-world policy learning scheme that improves over the human prior using interactions. We show one-shot generalization and success in real-world settings, including 20 different manipulation tasks in the wild.

[ CMU ]

I cannot believe that this system made it to the commercial pilot stage, but pretty awesome that it has, right?

[ Tevel ]

My favorite RoboCup event, where the world championship robots take on the RoboCup trustees!

[ RoboCup ]

WeRobotics is coordinating critical cargo-drone logistics in Madagascar with Aerial Metric, Madagascar Flying Labs, and PSI. This project serves to connect hard-to-reach rural communities with essential, life-saving medicines, including the delivery of just-in-time COVID-19 vaccines.

[ WeRobotics ]

With the possible exception of an octopus tentacle, the trunk of an elephant is the robotic manipulator we should all be striving for.

[ Georgia Tech ]

I don’t know if this ornithopter is more practical than a traditional drone, but it’s much more beautiful to watch.

[ GRVC ]

While I certainly appreciate the technical challenges of making drones that can handle larger payloads, I still feel like the actual challenge that Wing should be talking about is whether suburban drone delivery of low-value consumer goods is a sustainable business.

[ Wing ]

Microsoft Project AirSim provides a rich set of tools that enables you to rapidly create custom machine-learning capabilities. Realistic sensor models, pretrained neural networks, and extensible autonomy building blocks accelerate the training of aerial agents.

[ AirSim ]

Deep Robotics recently announced the official release of the Jueying X20 hazard-detection-and-rescue robot dog solution. With the flexibility to deliver unmanned detection-and-rescue services, the Jueying X20 is designed for the complex terrain of a post-earthquake landscape, the interiors of unstable, debris-filled buildings, tunnel traffic accidents, and the toxic, hypoxic, high-density-smoke environments created by chemical pollution or fire.

[ Deep Robotics ]

Highlights from the RoboCup 2022 MSL Finals: Tech United vs. Falcons.

And here’s an overview of the wider event, from Tech United Eindhoven.

[ Tech United ]

One copter? Two copters!

[ SUTD ]

The Humanoid AdultSize RoboCup league is perhaps not the most dynamic, but it’s impressive anyway.

[ Nimbro ]

First autonomous mission for the PLaCE drone at sea, performing multispectral surveys and in situ water-column measurements of parameters such as pH, chlorophyll, PAR (photosynthetically active radiation), temperature, and salinity.

[ PRISMA Lab ]

Here’s one of the most interesting drones I’ve seen in a while: a sort of winged tricopter that can hover very efficiently by spinning.

[ Hackaday ]

Keep in mind that this is a paid promotion (and it’s not very technical at all), but it’s interesting to watch a commercial truck driver review an autonomous truck.

[ Plus ]

Curiosity has now been exploring Mars for 10 years (!) of its two-year mission.

[ JPL ]



Robust.AI was founded three years ago to do, well, nobody was quite sure what. Or rather, some people were quite sure what, and those people would be the Robust.AI team, who weren’t saying much. But it was that founding team that made us take notice, since it included highly experienced folks from SRI and Alphabet, as well as some big names in academia: Gary Marcus, Henrik Christensen, and Rodney Brooks.

A recent press release answered some questions about what Robust.AI has been working on but raised some questions as well, so we had a brief chat with Brooks himself to get all the details sorted out.

Here’s the important stuff from the press release:

Robust.AI, the company transforming how robots work for people, has unveiled its first software suite product, Grace, as well as a hardware product concept, Carter. Grace and Carter combine AI, robotics, and human-centered design to improve the way people and robots interact and work together. These technologies will be applied to the warehousing industry where more than 80 percent of warehouses operate without any automation, while labor shortages continue to wreak havoc on the global supply chain.

Grace is a software suite that enables dynamic coordination between people and robots in any warehouse with any workflow. The software can map out work between people and robots, allowing it to distribute and adapt as necessary. It provides situational awareness that makes robots capable in spaces where people move and work, including semantic and people perception as well as mapping and localization using cameras. Warehouse managers will be able to customize workflows, integrations, and behavior for an entire fleet of robots in minutes, through a no-code interface.

During the development of Grace, Robust.AI recognized that current Autonomous Mobile Robots (AMR) are slow to build, deploy, and adopt for many applications. This insight led to the development of Robust.AI’s first hardware product, Carter, a Collaborative Mobile Robot (CMR). Carter provides flexible automation for material handling in warehouses and beyond. As a CMR, Carter works around people without changing the environment, allowing for fluent coordination while increasing the engagement and productivity of people.

The obvious question here is how exactly Grace and Carter are different from existing AMRs, because at first glance, a lot of this is stuff that other AMR companies already do—adaptable cloud-based software infrastructure, working in the same spaces as busy squishy humans, and a mobile robot that can move stuff around. A mobile robot, it’s important to note, that doesn’t have any manipulators on it.

“We kept hearing that we were working on manipulators, and we’re not, and so it was important to let people know!” Brooks says. That’s the problem with staying in stealth mode for so long, but Rodney Brooks seems genuinely excited by what Robust.AI is doing, and (more important) that it’s something unique, emphasizing that “both Grace and Carter we believe offer features that aren’t available on other systems.”

Let’s start with the software stack, Grace. Initially, a user can simply walk through their work environment with a tablet to build a VSLAM (visual simultaneous localization and mapping) model of the world that the robots then use, which, Brooks says, eliminates the need to joystick a robot around in order to generate an initial map. It’s not so much that joysticking around is difficult, but typically it’s something that AMR companies do as part of a deployment, and Robust.AI is hoping to lower that barrier to adoption by removing the requirement for a custom setup process. Users then label places and regions to tell the robots where to go, and the Grace system allocates people and robots to tasks. “You can get the whole system up and running in the cloud without having to reengineer anything,” Brooks says. “Eighty percent of the warehouses in the United States have zero automation, and we want to start getting them automated.”

Carter (“Cart-er,” get it?) is a mobile robot about the size of a shopping cart. It has no lidar, but it’s full of cameras (16 on the current version), with neural processors running models at each camera so that the only data that needs to be passed around is semantic information along with coordinates.
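To give a sense of what passing around "only semantic information along with coordinates" might look like in practice (this is our illustration, not Robust.AI's actual data format), each camera's on-board processor could emit compact detection records rather than raw images:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """Hypothetical per-camera output: a label plus a position, rather than pixels."""
        label: str         # e.g., "person", "pallet", "cart handle"
        x: float           # position in the robot's frame, in meters
        y: float
        z: float
        confidence: float  # detector confidence, from 0 to 1

    # One frame from one of the 16 cameras might reduce to a handful of these
    # records: kilobytes of semantics instead of megabytes of imagery.
    detections = [Detection("person", 1.8, -0.4, 0.0, 0.93)]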

“The key thing about Carter is that if you grab its handlebar, it’s now in power-assist mode, and you can just move it anywhere you want,” Brooks says. “With most other AMRs, even ones with good sensing, they go where they want, and the best you can do as a human is dance with them to try to block them. The important thing is that human workers are not separate from the automation—they get to use the automation.”

“[Our robots] really look at people, and they understand people. We have started to do some fairly simple semantic understanding of what people are doing, and make some predictions about what their behavior is going to be next. Our robots are deferential to people, instead of acting like they own the place. We want them to be friendly to the people who work there.” —Rodney Brooks

In a recent talk, Brooks describes what Robust.AI is going for with its system in terms of bears, goats, and dogs. You know, like this:

AGVs (automated guided vehicles) are like bears, because the prudent thing to do is to be physically separated from them—getting near them can be dangerous, and even in the best of cases, they’re not friendly to people. Most AMRs, meanwhile, are like goats. A goat is unlikely to eat you, and you can probably spend time around it without being mauled, but (depending on the goat) you’re likely to get somewhat annoyed by typical goat behavior. The goat will do its goaty things, and see you as little more than an obstacle or a source of treats or something to headbutt.

Robust.AI, meanwhile, is building what it's calling “Collaborative Mobile Robots,” which Brooks envisions as behaving more like service dogs. CMRs will pay attention to what you’re doing, understand what you want, and collaborate with you rather than just exist safely in your space. “Like a dance partner,” says Brooks.

Brooks explains that the advantage of this kind of hardware and software is how versatile it is, especially in small manufacturing where work changes all the time. In factories where carts are already used to move stuff from place to place, Carter fits right in. It’s not the same as warehouse fulfillment, but it shares a lot of the same components, and allows Robust.AI to target a wider variety of environments.

To be clear, Robust.AI isn’t going to take on Amazon or Ocado or highly structured warehouses. And the company isn’t taking on moderately structured environments either, where the environment and workflows are stable enough that you don’t have to change things around very much. Robust.AI is targeting “micro-fulfillment” (which means up to about 50,000 square feet) along with manufacturing operations of similar size. “The smallest warehouse we’ve seen is 100 square feet,” Brooks says. “That’s too small for us. But there are lots of warehouses of 10,000, 20,000, 50,000 square feet in the form of fulfillment centers moving into cities, and warehouses of those sizes don’t have automation solutions that are easy to adopt. We’re trying to add automation with very little effort.”

Brooks hopes that minimizing effort could mean that Robust.AI’s system makes sense even for small facilities. Like, facilities that may need only one robot. Being willing to sell a single robot isn’t something that I think many other AMR companies would bother with, simply because the initial deployment makes it really hard to justify something that small. And, to be fair, Robust.AI itself isn’t necessarily looking to do this long-term, says Brooks. “We’re looking for partners who have a big presence in manufacturing and in selling and servicing machines, who want to build and sell these robots with our software stack. That’s how we’ll get to a massive scale much more rapidly. We think this is a fantastic market that we can move into quickly.”

While the pictures and video are a pretty good example of the sort of thing that Robust.AI is looking to deploy, this is just a prototype of Carter, and we’re told that the Grace software stack is also working on other platforms. “We’re talking to a lot of customers,” Brooks says, “and we expect to have trials this year.” And by next year, Robust.AI is planning to have a production line geared up to the point where it’ll be able to get its robots on the market.



Recent advances in soft robotics have opened up possibilities for the construction of smart fibers and textiles that have a variety of mechanical, therapeutic, and wearable possibilities. These fabrics, when programmed to expand or contract through thermal, electric, fluid, or other stimuli, can produce motion, deformation, or force for different functions.

Engineers at the University of New South Wales (UNSW), Sydney, Australia, have developed a new class of fluid-driven smart textiles that can “shape-shift” into 3D structures. Despite recent advances in the development of active textiles, “they are either limited with slow response times due to the requirement of heating and cooling, or difficult to knit, braid, or weave in the case of fluid-driven textiles,” says Thanh Nho Do, senior lecturer at the UNSW’s Graduate School of Biomedical Engineering, who led the study.

To overcome these drawbacks, the UNSW team demonstrated a proof of concept of miniature, fast-responding artificial muscles made up of long, fluid-filled silicone tubes that can be manipulated through hydraulic pressure. The silicone tube is surrounded by an outer helical coil as a constraint layer to keep it from expanding like a balloon. Due to the constraint of the outer layer, only axial elongation is possible, giving the muscle the ability to expand under increased hydraulic pressure or contract when pressure is decreased. Using this mechanism, says Do, they can program a wide range of motion by changing the hydraulic pressure.

“A unique feature of our soft muscles compared to others is that we can tune their generated force by varying the stretch ratio of the inner silicone tube at the time they are fabricated, which provides high flexibility for use in specific applications,” Do says.

The researchers used a simple, low-cost fabrication technique, in which a long, thin silicone tube is directly inserted into a hollow microcoil to produce the artificial muscles, with a diameter ranging from a few hundred micrometers to several millimeters. “With this method, we could mass-produce soft artificial muscles at any scale and size—diameter could be down to 0.5 millimeters, and length at least 5 meters,” Do says.

The filament structure of the muscles allows them to be stored in spools and cut to meet specific length requirements. The team used two methods to create smart fibers from the artificial muscles. One was using them as active yarns to braid, weave, or knit into active fabrics using traditional textile-making technologies. The other was by integrating them directly into conventional, passive fabrics.

The combination of hydraulic pressure, fast response times, light weight, small size, and high flexibility makes UNSW’s smart textiles versatile and programmable. According to Do, the expansion and contraction of the active fabrics are similar to those of human muscle fibers.

This versatility opens up potential applications in soft robotics, including shape-shifting structures, biomimicking soft robots, locomotion robots, and smart garments. There are possibilities for use as medical/therapeutic wearables, as assistive devices for those needing help with movement, and as soft robots to aid the rescue and recovery of people trapped in confined spaces.

Although these artificial muscles are still a proof of concept, Do is optimistic about commercialization in the near future. “We have a Patent Cooperation Treaty application around these technologies,” he says. “We are also working on clinical validation of our technology in collaborations with local clinicians, including smart compression garments, wearable assistive devices, and soft haptic interfaces.”

Meanwhile, the research team continues to work on improvements. “We have currently achieved an outer diameter of 0.5 mm, which we believe is still large compared to the human muscle fibers,” says Do. “[So] one of the main challenges of our technology is how to scale the muscle to a smaller size, let’s say less than 0.1 mm in diameter.”

Another challenge, he adds, relates to the hydraulic source of power, which requires electric wires to connect and drive the muscles. “Our team is working on the integration of a new soft, miniature pump and wireless communication modules that will enable untethered driving systems to make it a smaller and more compact device.”

Analytical modeling for bending actuators is yet another area of improvement. Concomitant studies to demonstrate the feasibility of machine-made smart textiles and washable smart textiles in the smart garment industry are also necessary, the researchers say, as are further studies regarding incorporating functional components into smart textiles to provide additional benefits.



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Who is better at performing surgery: an experienced surgeon or a robot?

Typically, surgeons have to make relatively large incisions during surgery, whereas a robot’s small instruments can fit through much smaller ones. Given this advantage of robotic systems, it’s now quite common for surgeons to use remote-controlled robotic arms to perform surgery—combining the precision of an experienced human with the minimal invasiveness of a small robotic arm. Nevertheless, the surgeon is controlling the robot in these cases, and a fully automated robotic system that can outperform surgeons in terms of precision is yet to be realized.

A recent advance shows that robots could surpass human performance in the near future, however. In a paper published 10 May in IEEE Transactions on Automation Science and Engineering, a multinational team of researchers reported the results of a study where a robot was able to complete a common surgery training task with the same accuracy as an experienced surgeon, while completing the task more quickly and consistently.

Minho Hwang, an assistant professor at the Daegu Gyeongbuk Institute of Science and Technology, in South Korea, was involved in the study. He notes that many robotic systems currently rely on automated control of cables, which are subjected to friction, cable coupling, and stretch—all of which can make precision positioning difficult.

“When humans control the robots, they can compensate through human visual feedback,” explains Hwang. “But automation of robot-assisted surgery is very difficult due to [these] position errors.”

In their study, Hwang and colleagues took a standard da Vinci robotic system, a model commonly used for robot-assisted surgery, and strategically placed 3D-printed markers on its robotic arm. This allowed the team to track its movements using a color and depth sensor. They then analyzed the arm’s movements using a machine-learning algorithm. Results suggest that the trained model can reduce the mean tracking error by 78 percent, from 2.96 millimeters to 0.65 mm.
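As a quick sanity check on that figure (our arithmetic, not the paper’s), the reduction follows directly from the two reported errors:

\[
\frac{2.96\ \mathrm{mm} - 0.65\ \mathrm{mm}}{2.96\ \mathrm{mm}} \approx 0.78,
\]

or roughly a 78 percent drop in mean tracking error.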

Next, the researchers put their system to the test against an experienced surgeon who had performed more than 900 surgeries, as well as against nine volunteers with no surgical experience. The study participants were asked to complete a peg transfer task, which is a standardized test for training surgeons that involves transferring six triangular blocks from one side of a pegboard to the other and then back again. While the task sounds simple enough, it requires millimeter precision.

The study participants completed three different variations of the peg task using the da Vinci system: unilateral (using one arm to move a peg), bilateral (using two arms to simultaneously move two pegs) and bilateral with a crossover (using one arm to pick up the peg, transfer it to the other arm, and place the peg on the board). Their robot-assisted performance was compared to that of the fully automated robotic system designed by Hwang’s team.

Hwang and his colleagues show how their robot system can outperform a surgeon in a precision training task. In the most difficult variation of the task, called a bilateral handover, the robot must pick up a peg, transfer it to the other robotic arm, and then place the peg back on the pegboard, all with millimeter precision. The robot outperforms the surgeon by 31.7 percent in mean transfer time.

Using one arm, the surgeon outperformed the automated robot in terms of speed. But in the more complex tasks involving two arms, the robot outperformed the surgeon.

For example, for the most difficult task (bilateral handovers), the surgeon achieved a success rate of 100 percent with a mean transfer time of 7.9 seconds. The robot had the same success rate, but with a mean transfer time of just 6.0 seconds.
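For clarity, the 31.7 percent figure cited in the caption above compares the surgeon’s mean transfer time against the robot’s (again, our arithmetic from the reported times):

\[
\frac{7.9\ \mathrm{s} - 6.0\ \mathrm{s}}{6.0\ \mathrm{s}} \approx 0.317,
\]

that is, the surgeon took about 31.7 percent longer per transfer, or equivalently, the robot was about 24 percent faster.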

“We were very surprised by the robot’s speed and accuracy, as it is very hard to surpass the skill of a trained human surgeon,” says Ken Goldberg, a professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, who was also involved in the study. “We were also surprised by how consistent the robot was; it transferred 120 blocks flawlessly without a single failure.”

Goldberg and Hwang note that this is a preliminary study in a controlled environment, and more studies are still needed to achieve fully automated robotic surgery. But as far as they are aware, this is the first instance of a robot outperforming a human in a surgery-related training task.

“We have demonstrated that fast and accurate automation is feasible for one surgical task involving rigid objects of known shape. The next step is to demonstrate this for other tasks and in the much more complex environment of a human body,” says Hwang.

He says that, in future work, the team plans to extend their approach to automating surgical subtasks such as tissue suturing, and that they would like to build upon their methods for calibration, motion planning, visual servoing, and error recovery.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

GITAI will demonstrate the capabilities of its autonomous robot outside the International Space Station Bishop airlock in 2023, including thermal blanket manipulation and component installation.

[ GITAI ]

Thanks, Sho!

Today we set up a small football [soccer] pitch in our office and let two Bittle robot dogs play for the first time. The game was unexpectedly fierce, with a lot of shots and tackles. The goalkeeper, Sox, also did a great job, though he was sometimes absent-minded. But who can blame a little robot cat?

The programmable robot dogs run on Arduino with the open-source quadruped framework OpenCat. You can control the robots via mobile or desktop apps and program more advanced movements and tricks in C++ or Python. Unlike Sox, our robot dogs are real and for sale! Learn programming, robotics, and AI while making your robot the hero of the playground!

[ Petoi ]

Thanks, Rongzhong!

We humans acquire our body-model as infants, and robots are following suit. A Columbia Engineering team announced today they have created a robot that—for the first time—is able to learn a model of its entire body from scratch, without any human assistance. In a new study published by Science Robotics, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.

[ Columbia ]

Using biological experiments, robot models, and a geometric theory of locomotion, researchers at the Georgia Institute of Technology investigated how and why intermediate lizard species, with their elongated bodies and short limbs, might use their bodies to move. They uncovered the existence of a previously unknown spectrum of body movements in lizards, revealing a continuum of locomotion dynamics between lizardlike and snakelike movements.

[ Georgia Tech ]

NASA’s VIPER team practiced driving the rover down Griffin’s sizeable ramps to mimic safely egressing onto the lunar surface. Griffin’s ramps were positioned in a range of average and worst-case inclines up to 33 degrees to thoroughly test how VIPER could exit the lander.

[ NASA ]

Existing magnetically actuated soft robots require an external magnetic field to generate motion, limiting them to carefully controlled laboratory settings. Here, we introduce an electromagnetically actuated soft robot that can locomote without an external magnetic field. The robot is designed to carry its own magnet, which it alternately retracts and repels. Friction-biased feet transform this back-and-forth linear motion into forward locomotion, mimicking an earthworm gait.

[ Faboratory ]

Sweet potato packing is surprisingly complex!

[ Soft Robotics ]

Impressions from the RoboCup 2022 Humanoid AdultSize Drop-In Tournament. Team NimbRo from the University of Bonn, Germany, came in first with 22 points; Team HERoEHS from Korea came in second with 5.5 points; and Team RoMeLa from the United States came in third.

[ NimbRo ]

Given randomized, known positions of a bowl and a set of utensils (fork, knife, and spoon), the robot’s goal is to "set" the table for a variety of table configurations. To solve this, we use a Task and Motion Planning (TAMP) framework, PDDLStream. We also integrate an orientation constraint, which enforces that the robot keeps a bowl filled with cereal in an upright orientation!

[ CSAIL ]

The best thing about whole body teleoperation of robots is the suit you get to wear while you do it.

[ Inria ]

Shadow has merged our Shadow Dexterous Hand with our lightweight Shadow glove to give you the newest solution for dexterous manipulation and grasping.

[ Shadow Hand ]

Autonomous operation of the K-Max Titan Helicopter enabled by Near Earth Autonomy.

[ Near Earth Autonomy ]

The VoloDrone is our uncrewed, fully electric utility drone capable of carrying an impressive—and unprecedented—payload. While there are many design overlaps with the VoloCity, we created the VoloDrone to offer heavy-lift services to a slew of industries, and it will be deployed where classic transportation modes reach their limits.

[ Volodrone ]

Welcome to our new video series where we showcase some of the unique and interesting robots that come out of the Clearpath integration shop. In our first episode, we have a fully-loaded and autonomous Husky UGV, equipped with a Universal Robots UR5 arm, our Outdoor Navigation software and Indoor Navigation software, along with all the supporting sensors.

[ Clearpath ]

"ROBOTICS: The Future is Now" William Shatner (of Star Trek) talks about the Technology of Robotics in this rarely seen 1984 science documentary.

[ CHAP ]

This video is...strange. Like, this is absolutely not a new idea, some interesting choices have been made on the hardware side, and I’m not at all sure what to think about the software side.

Plus, the website has red-flag level hype. Hmm.

[ Giant AI ]

Hod Lipson’s MARS 2022 talk on building self-aware machines, where he describes some of his past and current work towards sentient machines.

Here’s another video from Hod Lipson’s YouTube channel about the Golem project, which is over 20 years old and is still cool!

[ Creative Machines Lab ]

Kathleen McDermott will discuss her practice of hacking, DIYing, and crafting with consumer and hobbyist electronics, and demonstrate the experiments with kinematics she has developed with mentorship from the Kod Lab. Central to McDermott’s work is the idea that absurdity and play can be useful ways to reframe our relationships to technology and productivity.

[ GRASP Lab ]



In order to better understand how people will interact with mobile robots in the wild, we need to take them out of the lab and deploy them in the real world. But this isn’t easy to do.

Roboticists tend to develop robots under the assumption that they’ll know exactly where their robots are at any given time—clearly that’s an important capability if the robot’s job is to usefully move between specific locations. But that ability to localize generally requires the robot to have powerful sensors and a map of its environment. There are ways to wriggle out of some of these requirements: If you don’t have a map, there are methods that build a map and localize at the same time, and if you don’t have a good range sensor, visual navigation methods use just a regular RGB camera, which most robots would have anyway. Unfortunately, these alternatives to traditional localization-based navigation are either computationally expensive, not very robust, or both.

We ran into this problem when we wanted to deploy our Kuri mobile social robot in the halls of our building for a user study. Kuri’s lidar sensor can’t see far enough to identify its location on a map, and its onboard computer is too weak for visual navigation. After some thought, we realized that for the purposes of our deployment, we didn’t actually need Kuri to know exactly where it was most of the time. We did need Kuri to return to its charger when it got low on battery, but this would be infrequent enough that a person could help with that if necessary. We decided that perhaps we could achieve what we wanted by just letting Kuri abandon exact localization, and wander.

Robotic Wandering

If you’ve seen an older-model robot vacuum cleaner doing its thing, you’re already familiar with what wandering looks like: The robot drives in one direction until it can’t anymore, maybe because it senses a wall or because it bumps into an obstacle, and then it turns in a different direction and keeps going. If the robot does this for long enough, it’s statistically very likely to cover the whole floor, probably multiple times. Newer and fancier robot vacuums can make a map and clean more systematically and efficiently, but these tend to be more expensive.

You can think of a wandering behavior as consisting of three parts (sketched in code just after this list):

  1. Moving in a straight line
  2. Detecting event(s) that trigger the selection of a new direction
  3. A method that’s used to select a new direction
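To make that structure concrete, here is a minimal Python sketch of the three-part loop. It is not Kuri’s actual code; the robot interface (drive, obstacle_detected, bumper_pressed, stop) is a placeholder for whatever sensing and control API a particular platform exposes, and the direction selector here is deliberately naive.

    import math
    import random
    import time

    def wander(robot, step_duration=0.5):
        """Drive straight until an event triggers a turn, then pick a new heading."""
        heading = 0.0  # radians, relative to wherever the robot started
        while robot.ok():
            # Part 1: move in a straight line along the current heading.
            robot.drive(heading, speed=0.3)
            time.sleep(step_duration)

            # Part 2: detect an event that should trigger a new direction.
            if robot.obstacle_detected() or robot.bumper_pressed():
                robot.stop()
                # Part 3: select a new direction. A random choice works,
                # but as described below, smarter selection matters a lot.
                heading = random.uniform(-math.pi, math.pi)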

Many possible wandering behaviors turn out not to work very well. For example, we found that having the robot move a few meters before selecting a new direction at random led it to get stuck moving back and forth in long corridors. The curve of the corridors meant that simply waiting for the robot to collide before selecting a new direction quickly devolved into the robot bouncing between the walls. We explored variations using odometry information to bias direction selection, but these didn’t help because the robot’s estimate of its own heading—which was poor to begin with—would degrade every time the robot turned.

In the end, we found that a preference for moving in the same direction as long as possible—a strategy we call informed direction selection—was most effective at making Kuri roam the long, wide corridors of our building.

Informed direction selection uses a local costmap—a small, continuously updating map of the area around the robot—to pick the direction that is easiest for the robot to travel in, breaking ties in preference for directions that are closer to the previously selected direction. The resulting behavior can look like a wave; the robot commits to a direction, but eventually an obstacle comes into view on the costmap and the local controller starts to turn the robot slightly to “get around it.” If it were a small obstruction, like a person walking by, the robot would circumnavigate and continue in roughly the original direction, but in the case of large obstacles like walls, the local controller will eventually detect that it has drifted too far from the original linear plan and give up. Informed direction selection will kick in and trace lines through the costmap to find the most similar heading that goes through free space. Typically, this will be the line that moves along and slightly away from the wall.
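As a rough illustration of informed direction selection (our sketch of the idea, not the authors’ implementation), candidate headings can be scored by tracing rays through the local costmap, preferring long stretches of free space and penalizing turns away from the previous heading. The costmap interface and weights below are assumptions.

    import math

    def informed_direction_selection(costmap, previous_heading,
                                     num_candidates=36, max_range=3.0,
                                     step=0.05, turn_penalty=0.2):
        """Pick the heading with the most free space ahead, breaking ties
        in favor of headings close to the previously selected one."""
        best_heading, best_score = previous_heading, -math.inf
        for i in range(num_candidates):
            heading = -math.pi + i * (2.0 * math.pi / num_candidates)
            # Trace a line through the costmap until it hits an obstacle
            # (or the edge of the window); longer free rays score higher.
            free_distance, d = 0.0, 0.0
            while d < max_range:
                x, y = d * math.cos(heading), d * math.sin(heading)
                if costmap.is_occupied(x, y):  # assumed costmap API
                    break
                free_distance = d
                d += step
            # Penalize headings that require a big turn, so the robot
            # keeps moving in the same direction as long as possible.
            diff = heading - previous_heading
            turn = abs(math.atan2(math.sin(diff), math.cos(diff)))
            score = free_distance - turn_penalty * turn
            if score > best_score:
                best_heading, best_score = heading, score
        return best_heading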

Our wandering behavior is more complicated than something like always choosing to turn 90 degrees without considering any other context, but it’s much simpler than any approach that involves localization, since the robot just needs to be able to perceive obstacles in its immediate vicinity and keep track of roughly which direction it’s traveling in. Both of these capabilities are quite accessible, as there are implementations in core ROS packages that do the heavy lifting, even for basic range sensors and noisy inertial measurement units and wheel encoders.

Like more intelligent autonomous-navigation approaches, wandering does sometimes go wrong. Kuri’s lidar has a hard time seeing dark surfaces, so it would occasionally wedge itself against them. We use the same kinds of recovery behaviors that are common in other systems, detecting when the robot hasn’t moved (or hasn’t moved enough) for a certain duration, then attempting to rotate in place or move backward. We found it important to tune our recovery behaviors to unstick the robot from the hazards particular to our building. In our first rounds of testing, the robot would reliably get trapped with one tread dangling off a cliff that ran along a walkway. We were typically able to get the robot out via teleoperation, so we encoded a sequence of velocity commands that would rotate the robot back and forth to reengage the tread as a last-resort recovery. This type of domain-specific customization is likely necessary to fine-tune wandering behaviors for a new location.
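The stuck detection and last-resort recovery described above can be sketched in a few lines against a ROS-style velocity interface (geometry_msgs/Twist messages published on a command topic). The thresholds and the back-and-forth tread wiggle below are illustrative values rather than the ones tuned for Kuri, and get_pose stands in for whatever odometry source is available.

    import math
    import rospy
    from geometry_msgs.msg import Twist

    def wiggle_recovery(cmd_vel_pub, cycles=3):
        """Last resort: rotate back and forth to try to reengage a slipped tread."""
        twist = Twist()
        for _ in range(cycles):
            twist.angular.z = 0.8        # rotate one way...
            cmd_vel_pub.publish(twist)
            rospy.sleep(0.5)
            twist.angular.z = -0.8       # ...then the other
            cmd_vel_pub.publish(twist)
            rospy.sleep(0.5)
        cmd_vel_pub.publish(Twist())     # stop

    def monitor_progress(get_pose, cmd_vel_pub, stuck_time=10.0, min_motion=0.05):
        """Run recovery if the robot moves less than min_motion meters in stuck_time seconds."""
        last_pose = get_pose()
        last_progress = rospy.get_time()
        rate = rospy.Rate(2)
        while not rospy.is_shutdown():
            pose = get_pose()
            if math.hypot(pose.x - last_pose.x, pose.y - last_pose.y) > min_motion:
                last_pose, last_progress = pose, rospy.get_time()
            elif rospy.get_time() - last_progress > stuck_time:
                wiggle_recovery(cmd_vel_pub)
                last_progress = rospy.get_time()
            rate.sleep()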

Other types of failures are harder to deal with. During testing, we occasionally ran the robot on a different floor, which had tables and chairs with thin, metallic legs. Kuri’s lidar couldn’t see these reliably and would sometimes “clothesline” itself with the seat of the chair, tilting back enough to lose traction. No combination of commands could recover the robot from this state, so adding a tilt-detection safety behavior based on the robot’s cliff sensors would’ve been critical if we had wanted to deploy on this floor.

Using Human Help

Eventually, Kuri needs to get to a charger, and wandering isn’t an effective way of making that happen. Fortunately, it’s easy for a human to help. We built some chatbot software that the robot used to ping a remote helper when its battery was low. Kuri is small and light, so we opted to have the helper carry the robot back to its charger, but one could imagine giving a remote helper a teleoperation interface and letting them drive the robot back instead.
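As a concrete (and hypothetical) example of how small that helper-notification piece can be: if the chat tool exposes an incoming-webhook URL, as most workplace chat services do, the low-battery ping is just an HTTP POST. The URL and threshold here are made up.

    import requests

    WEBHOOK_URL = "https://chat.example.com/hooks/kuri-helpers"  # hypothetical endpoint

    def maybe_ask_for_help(battery_percent, threshold=15):
        """Ping the on-call helper when the battery gets low."""
        if battery_percent <= threshold:
            requests.post(WEBHOOK_URL, json={
                "text": f"Kuri is at {battery_percent}% battery. "
                        "Please carry it back to its charger.",
            })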

Kuri was able to navigate all 350 meters of hallway on this floor, which took it 32 hours in total.

We deployed this system for four days in our building. Kuri was able to navigate all 350 meters of hallway on the floor, and ran for 32 hours total. Each of the 12 times Kuri needed to charge, the system notified its designated helper, and they found the robot and placed it on its charger. The robot’s recovery behaviors kept it from getting stuck most of the time, but the helper needed to manually rescue it four times when it got wedged near a difficult-to-perceive banister.

Wandering with human help enabled us to run an exploratory user study on remote interactions with a building photographer robot that wouldn’t have been possible otherwise. The system required around half an hour of the helper’s time over the course of its 32-hour deployment. A well-tuned autonomous navigation system could have done it with no human intervention at all, but we would have had to spend a far greater amount of engineering time to get such a system to work that well. The only other real alternative would have been to fully teleoperate the robot, a logistical impossibility for us.

To Wander, or Not to Wander?

It’s important to think about the appropriate level of autonomy for whatever it is you want a robot to do. There’s a wide spectrum between “autonomous” and “teleoperated,” and a solution in the middle may help you get farther along another dimension that you care more about, like cost or generality. This can be an unfashionable suggestion to robotics researchers (for whom less-than-autonomous solutions can feel like defeat), but it’s better to think of it as an invitation for creativity: What new angles could you explore if you started from an 80 percent autonomy solution rather than a fully autonomous solution? Would you be able to run a system for longer, or in a place you couldn’t before? How could you sprinkle in human assistance to bridge the gap?

We think that wandering with human help is a particularly effective approach in some scenarios that are especially interesting to human-robot interaction researchers, including:

  • Studying human perceptions of robots
  • Studying how robots should interact with and engage bystanders
  • Studying how robots can interact with remote users and operators

You obviously wouldn’t want to build a commercial mail-courier robot using wandering, but it’s certainly possible to use wandering to start studying some of the problems these robots will face. And you’ll even be able to do it with expressive and engaging platforms like Kuri (give our code a shot!), which wouldn’t be up for the task otherwise. Even if wandering isn’t a good fit for your specific use case, we hope you’ll still carry the mind-set with you—that simple solutions can go a long way if you budget just a touch of human assistance into your system design.

Nick Walker researches how humans and robots communicate with one another, with an eye toward future home and workplace robots. While he was a Ph.D. student at the University of Washington, he worked on both implicit communication—a robot’s motion, for instance—and explicit communication, such as natural-language commands.

Amal Nanavati does research in human-robot interaction and assistive technologies. His past projects have included developing a robotic arm to feed people with mobility impairments, developing a mobile robot to guide people who are blind, and cocreating speech-therapy games for and with a school for the deaf in India. Beyond his research at the University of Washington, Amal is an activist and executive board member of UAW 4121.



Cruise, the San Francisco–based designer and operator of all-electric self-driving cars, employs nearly 2,000 engineers, including somewhere between 300 and 900 engineers with Ph.D. degrees. They work in hardware and software. They specialize in AI, security, and safety. And though, indeed, some have robotics, automation, or automotive backgrounds, many don’t. Instead, they come from an incredibly long list of different technical fields—e-commerce, finance, game development, animation, cameras, semiconductors, and app development.

Here’s what Mohamed Elshenawy, Cruise’s executive vice president of engineering, told IEEE Spectrum about the company’s workforce. (Elshenawy himself came to Cruise from stints as chief technology officer at a financial services startup and leader of a technology team at Amazon.)

IEEE Spectrum: Let’s start with the big picture of your engineering team.

Mohamed Elshenawy: In AI, we have machine-learning (ML) engineers that build the on-the-road decision-making modules. We also have the engineers that help build the tools for our continuous-learning machine. We have data engineers and data scientists, and even UI folks that help with the tools that the main core ML engineers use.

In robotics, we have AV foundations engineers, who build and maintain our robotics operating systems, and embedded systems developers.

In our security and safety operations, we have engineers working on threat modeling, app security engineering, IT enterprise security, and the security of the vehicle itself, along with all the systems engineers responsible for test validation, generating test-case scenarios, and tracking it all.

Our hardware engineers build our own EVs from the ground up, in partnership with GM and Honda. They handle the definition, design, development, and production of sensors, compute modules, and related hardware, and they span about 50 engineering disciplines, including acoustic engineers, power engineers, system-on-chip people, and so on.

Finally, we have the product engineering team, which covers both user-facing and internal apps, and the infrastructure team, which builds the technical foundations (cloud infrastructure, internal tools, etc.) that the rest of our engineers rely on every day to get work done.


Where do these engineers generally come from?

Elshenawy: Many of our AI engineers come from academia, consumer tech, and finance. Our simulation group is part of this team. They include several engineers that helped build The Sims and others that worked for gaming and animation studios like Pixar, LucasArts, and Ubisoft. Our hardware engineers include people who came from Kodak, JPL, and chipmakers, as well as academia. For robotics, we generally look to robotics, other automotive, and aerospace companies for hiring. On the security side, we hire a mix of researchers and practitioners—and even literal hackers, including the team that infamously hacked a Jeep. Our product and infrastructure engineers come from a variety of traditional engineering companies as well as startups; they’ve previously built videoconferencing software, cloud computing platforms, and even a meditation app.

How has the mix shifted over the years?

Elshenawy: We’re solving for a problem that is mainly rooted in general AI, so we’ve always been heavy on the AI side of things. Because we are cloud native, we are able to leverage a lot of the existing technology that is provided by cloud providers, such as Google Cloud Platform and Azure; we don’t need to reinvent that. So we’re leaning more heavily towards AI, robotics, and hardware over time.

Also, we’re ramping up product engineering now that we have started charging members of the public to ride in fully driverless cars. We expect that to continue to grow there as we solve for the technology and expand to multiple cities in the United States and internationally.

You mean people working on the customer-facing app?

Elshenawy: That and a lot more. This includes the customer-facing app that lets you order a ride, the in-car experience, and all the fleet operation on the back end, where we control our fleet, placing our fleet ahead of demand and determining pricing, and all the services that keep these lights on.


What do your engineers have in common?

Elshenawy: The self-driving cars problem is a great AI problem, and some people think about it as essentially a research problem, because you’re pushing the state of the art. But when you think about how you are actually going to pragmatically ship something continuously, it’s all rooted in experimentation.

So one common factor that we find in the engineers who are successful here, regardless of where in our organization that they land, is having the experimentation mind-set, the humble, resilient mind-set of someone who’s continuously curious and very agile in nature. There are certain types of engineers that don’t deal well with uncertainty and experimentation, and there are other engineers who thrive under an environment of continuous learning.

So the common trait among all these engineers is that learning, curiosity, and experimentation mind-set, and having the agility to deal with an unknown problem like this one.

What are the hardest roles to fill right now?

Elshenawy: Software, in general, has a shortage and will always have less supply than demand in the coming years, particularly in AI, applied machine learning, and deep learning, and we will continue to need these engineers in many different areas as we grow. Robotics specializations, and in general, control theory, will be very important as we go forward and those areas will continually face high demand.



In 2016, we wrote about Cargo Sous Terrain, a (then) US $3.4 billion concept for underground cargo tubes full of automated delivery carts whisking goods between cities and logistics centers across Switzerland at 30 kilometers per hour. The idea behind CST is to provide for long-term freight transport without relying on expansion of road and rail networks, which are already stuffed with both freight and passengers and don’t have much room to grow.

Like so many concepts of this kind, six years ago it seemed highly unlikely ever to happen. This past December, however, the Swiss parliament passed the necessary legal framework to enable underground freight transportation, meaning that the CST project can commence on 1 August.

Cargo sous terrain follows a principle similar to that of an automated conveyor system: driverless transport vehicles, able to pick up and deposit loads automatically at designated ramps and lifts, travel around the clock in the tunnels.

The vehicles, which travel on wheels and have an electric drive with induction rails, operate in three-track tunnels at a constant speed of around 30 kilometers per hour. The goods are transported on pallets or in modified containers. Thanks to refrigeration-compatible transport vehicles, the transport of fresh and chilled goods is also possible. Attached to the roof of the tunnel is a rapid overhead track for smaller goods packages.

It will for the first time be economically viable to transport small volumes on individual pallets or containers on an ongoing basis. The continuous flow of small-component goods obviates the need for waiting times at transfer stations. In addition, the space requirement can be massively reduced because temporary goods storage is no longer necessary.

Freight honestly seems like a much better use of small-scale underground transportation systems than passenger services, which have been the focus of a lot of recent speculation. Freight tunnels can be smaller and can operate at slower speeds than would be demanded by passengers, and since the entire system is autonomous, comfort (and to some extent safety) can be less of a focus.

The cost (to be covered entirely by the private sector) has increased from the initial 2016 estimate of $3.5 billion—CST is now estimated to cost between $30 and $35 billion for the full 500-kilometer network, although about $3 billion should be sufficient for the first phase, a 70-km section with 10 hubs that connects Zurich with a logistics center in Härkingen-Niederbipp to the west. The Swiss seem mostly unfazed by the scale of the project, and still consider it to be a good long-term investment. CST has partnered with a bunch of large Swiss logistics companies and retailers who see this solution as a complement to existing road and rail infrastructure. The entire project will run on renewable energy, and CST expects the amount of heavy trucks on roads to be reduced by up to 40 percent as its underground vehicles take over transport.
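Put in per-kilometer terms (a rough calculation from the figures above, not from CST), the first phase and the full network come out at broadly similar costs:

\[
\frac{\$3\ \mathrm{billion}}{70\ \mathrm{km}} \approx \$43\ \mathrm{million/km},
\qquad
\frac{\$30\text{--}35\ \mathrm{billion}}{500\ \mathrm{km}} \approx \$60\text{--}70\ \mathrm{million/km}.
\]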

With tangible planning measures soon to be underway for the initial line, other Swiss cantons are taking note. To the east of Zurich, St. Gallen and Thurgau are suggesting that an underground CST connection is “technically and economically realistic” and have begun determining possible hub locations. All of this is a far cry from actually digging a hole and running autonomous cargo pods through it, but it’s a lot closer to reality now than it was in 2016.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2022: 11 July–17 July 2022, BANGKOK
IEEE CASE 2022: 20 August–24 August 2022, MEXICO CITY
CLAWAR 2022: 12 September–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4 November–5 November 2022, LOS ANGELES
CoRL 2022: 14 December–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

Over the past few months the team at Everyday Robots, alongside Artist-in-Residence Catie Cuan, have been working on an experiment that transforms the robots from everyday physical tools to musical instruments. We map the joint velocities of the robot onto musical tracks, so the robot makes music while it moves—something akin to a “music mode.”

[ Everyday Robots ]

The ROS-Industrial open-source project reached the 10-year mark, and to celebrate, we reached out to the community to share snippets of the great work they have been doing. ROS-I seeks to extend ROS and now ROS 2 to industrially relevant hardware and applications, and the community has been a key part in realizing the successes to date. Thanks to all those that submitted, and we look forward to more success stories in the years to come!

[ ROS-I ]

Planning minimum-time trajectories in cluttered environments with obstacles is a challenging problem. The presented method achieves a 100 percent success rate of flying minimum-time policies without collision, while traditional planning and control approaches achieve only 40 percent. We show the approach in real-world flight with speeds reaching 42 kilometers per hour and acceleration up to 3.6 g.

[ UZH RPG ]

This work tackles the problem of robots collaboratively towing a load with cables to a specified goal location while avoiding collisions in real time. The introduction of cables (as opposed to rigid links) enables the robotic team to travel through narrow spaces by changing its intrinsic dimensions through slack/taut switches of the cable.

[ UC Berkeley ]

After analyzing data gathered when NASA’s OSIRIS-REx spacecraft collected a sample from asteroid Bennu in October 2020, scientists have learned something astonishing: The spacecraft would have sunk into Bennu had it not fired its thrusters to back away immediately after it grabbed dust and rock from the asteroid’s surface. It turns out that the particles making up Bennu’s exterior are so loosely packed and lightly bound to each other that if a person were to step onto Bennu they would feel very little resistance, as if stepping into a pit of plastic balls that are popular play areas for kids.

[ NASA ]

Nicolas Halftermeyer shared this video of a bunch of the cool robots he saw at Automatica 2022 in Germany.

[ Twitter ]

Thanks, Nicolas!

ESA’s four-wheeled, two-armed Interact rover was built by the agency's Human Robot Interaction Lab and modified for the rugged slopes of the volcano. This robot formed part of a team consisting of two DLR rovers—Lightweight Rover Units 1 and 2—along with a fixed ‘lunar’ lander supplying Wi-Fi and power to the rovers, plus a drone for surface mapping. The Karlsruhe Institute of Technology contributed the centipede-like Scout crawler, optimized for tough terrain, which could also serve as a relay between Interact and the lander, boosting its effective area of operations.

[ ESA ]

Building a 159-kilogram (350-pound) autonomous robot designed to go to very dark and tough places takes a while, even in time lapse.

[ WVUIRL ]

Kawasaki Robotics’ Astorino is a six-axis robot arm designed for education. Its structure is 3D printed and easy to repair, and it’s supposedly cheap, although I don’t know how cheap.

[ Kawasaki ]

This video shows uninterrupted operation of ANYmal Bull. The robot performs the following tasks: turning a wheel, pulling a lever, pushing a gate, and pulling a rope to lift a bucket. The robot walks around and up the ramp teleoperated via a remote controller. However, the robot solves each of the manipulation tasks autonomously.

[ SLMC ]

Thanks, Henrique!

Wing is leaving a lot of stuff out in this short video that throws shade at using cars for food delivery, namely that car infrastructure already exists and is useful for all kinds of other things, whereas Wing itself represents a massive amount of dedicated resource investment.

[ Wing ]

Maciej Antonik, Edu4Industry CEO, shares one of his customer success stories, showing that Quanser Autonomous Vehicles Studio is a viable solution that allows students to quickly test and learn pretty complex concepts in robotics.

[ Quanser ]

Global robot scientist Dennis Hong explores a future where humanoids, built to mirror human intellectual and locomotive capabilities, take on repetitive, scaled, and even dangerous tasks.

[ RoMeLa ]



Watching robots operate with speed and precision is always impressive, if not, at this point, always surprising. Sophisticated sensors and fast computing mean that a powerful and agile robot, like a drone, that knows exactly where it is and exactly where it’s going can reliably move in highly dynamic ways. This is not to say that it’s easy for the drone, but if you’ve got a nice external localization system, a powerful off-board computer, and a talented team of roboticists, you can perform some amazingly agile high-speed maneuvers that most humans could never hope to match.

I say "most" humans, because there are some exceptionally talented humans that are, in fact, able to achieve a similar level of performance to even the fastest and most agile drones. The sport of FPV (first person view) drone racing tests what’s possible with absurdly powerful drones in the hands of humans who must navigate complex courses with speed and precision that seems like it shouldn’t be possible, all while relying solely on a video feed sent from a camera on the front of the drone to the pilot’s VR headset. It’s honestly astonishing to watch.

A year ago, autonomous racing quadrotors from Davide Scaramuzza’s Robotics and Perception Group at the University of Zurich proved that they could beat the world’s fastest humans in a drone race. However, the drones relied on a motion capture system to provide very high resolution position information in real time, along with a computer sending control information from the safety and comfort of a nearby desk, which doesn’t really seem like a fair competition.

Earlier this month, a trio of champion drone racers traveled to Zurich for a rematch, but this time, the race would be fair: no motion capture system. Nothing off-board. Just drones and humans using their own vision systems and their own computers (or brains) to fly around a drone racing track as fast as possible.

To understand what kind of a challenge this is, it’s important to have some context for the level of speed and agility. So here’s a video of one of UZH’s racing drones completing three laps of a track using the motion capture system and off-board computation. This particular demo isn’t "fair," but it does give an indication of what peak performance of a racing drone looks like, with a reaction from one of the professional human pilots (Thomas Bitmatta) at the end:

As Thomas says at the end of the video, the autonomous drone made it through one lap of the course in 5.3 seconds. With a peak speed of 110 km/h, this was a staggering 1.8 seconds per lap faster than Thomas, who has twice won FPV drone racing’s MultiGP International World Cup.
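Putting those figures together (our arithmetic, from the numbers quoted above), the human lap time works out to about

\[
5.3\ \mathrm{s} + 1.8\ \mathrm{s} \approx 7.1\ \mathrm{s},
\]

so the fully instrumented autonomous drone needed roughly 25 percent less time per lap than the human world champion.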

The autonomous drone has several advantages in this particular race. First, it has near-perfect state estimation, thanks to a motion capture system that covers the entire course. In other words, the drone always knows exactly where it is, as well as its precise speed and orientation. Experienced human pilots develop an intuition for estimating the state of their system, but they can’t even watch their own drone while racing since they’re immersed in the first-person view the entire time. The second advantage that the autonomous drone has is that it’s able to compute a trajectory that traverses the course in a time-optimal way, considering the course layout and the constraints imposed by the drone itself. Human pilots have to practice on a course for hours (or even days) to discover what they think is an optimal trajectory, but they have no way of knowing for sure whether their racing lines can be improved or not.
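
To give a rough sense of what a time-optimal plan involves, here is a deliberately simplified sketch, not UZH’s actual planner: it treats the course as a chain of straight-line segments between gates, assumes the drone starts and stops at rest on each segment, and computes the minimum traversal time under hypothetical speed and acceleration limits. The real system optimizes a continuous trajectory over the full quadrotor dynamics, but even this toy version illustrates the kind of calculation that exact knowledge of the course geometry and the platform’s limits makes possible.

    import math

    # Toy model only: minimum time to cover a straight segment of length d (meters),
    # starting and ending at rest, with acceleration limit a_max and speed limit v_max.
    # A real racing planner optimizes a continuous trajectory over full quadrotor
    # dynamics and never stops at gates; this is just the 1-D bang-bang special case.
    def min_segment_time(d, v_max, a_max):
        d_accel = v_max ** 2 / a_max  # distance spent accelerating to v_max and braking back to rest
        if d <= d_accel:
            # Triangular profile: accelerate for half the distance, brake for the other half.
            return 2.0 * math.sqrt(d / a_max)
        # Trapezoidal profile: accelerate, cruise at v_max, brake.
        return 2.0 * v_max / a_max + (d - d_accel) / v_max

    # Hypothetical gate-to-gate distances in meters; not the real Zurich track layout.
    segments = [18.0, 25.0, 12.0, 30.0]
    v_max = 110 / 3.6    # 110 km/h expressed in m/s, matching the peak speed quoted above
    a_max = 4 * 9.81     # assume the drone can sustain roughly 4 g

    lap_time = sum(min_segment_time(d, v_max, a_max) for d in segments)
    print(f"Lap-time estimate under this toy model: {lap_time:.2f} s")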

Elia Kaufmann prepares one of UZH's vision-based racing drones on its launch platform. Evan Ackerman/IEEE Spectrum

So what, then, would make for a drone race in which humans and robots can compete fairly, but that also doesn’t ask the robots to be less robotic or the humans to be less human-y?

  • No external help. No motion capture system or off-board compute. Arguably, the humans have something of an advantage here, since they are off-board by definition, but the broader point of this research is to endow drones with the ability to fly themselves in aggressive and agile ways, so it’s a necessary compromise.
  • Complete knowledge of the course. Nothing on the course is secret, and humans can walk through it and develop a mental model. The robotic system, meanwhile, gets an actual CAD model. Both humans and robots also get practice time: the humans on the physical course with real drones, the robotic system in simulation. Both can use this practice time to find an optimal trajectory in advance.
  • Vision only. The autonomous drones use Intel RealSense stereo vision sensors, while the humans use a monocular camera streaming video from the drone. The humans may not get a stereo feed, but they do get better resolution and a higher frame rate than the RealSense gives the autonomous drone.

Three world-class human pilots were invited to Zurich for this race. Along with Thomas Bitmatta, UZH hosted Alex Vanover (2019 Drone Racing League champion) and Marvin Schäpper (2021 Swiss Drone League champion). Each pilot had as much time as they wanted on the course in advance, flying more than 700 practice laps in total. And on a Friday night in a military aircraft hangar outside of Zurich, the races began. Here are some preliminary clips from one of the vision-based autonomous drones flying computer-to-head with a human; the human-piloted drone is red, while the autonomous drone is blue:

With a top speed of 80 km/h, the vision-based autonomous drone outraced the fastest human by 0.5 second during a three-lap race, where just one or two tenths of a second is frequently the difference between a win and a loss. This victory for the vision-based autonomous drone is a big deal, as Davide Scaramuzza explains:

This demonstrates that AI-vs-human drone racing has the potential to revolutionize drone racing as a sport. What’s clear is that superhuman performance with AI drones can be achieved, but there is still a lot of work to be done to robustify these AI systems to bring them from a controlled environment to real-world applications. More details will be given in follow-up scientific publications.

I was at this event in Zurich, and I’d love to tell you more about it. I will tell you more about it, but as Davide says, the UZH researchers are working on publishing their results, meaning that all the fascinating details about exactly what happened and why will have to wait a bit until they’ve got everything properly written up. So stay tuned, we’ll have lots more for you on this.

The University of Zurich provided travel support to assist us with covering this event in person.



Before the robots arrived, surgical training was done the same way for nearly a century.

During routine surgeries, trainees worked with nurses, anesthesiologists, and scrub technicians to position and sedate the patient, while also preparing the surgical field with instruments and lights. In many cases, the trainee then made the incision, cauterized blood vessels to prevent blood loss, and positioned clamps to expose the organ or area of interest. That’s often when the surgeon arrived, scrubbed in, and took charge. But operations typically required four hands, so the trainee assisted the senior surgeon by suctioning blood and moving tissue, gradually taking the lead role as he or she gained experience. When the main surgical task was accomplished, the surgeon scrubbed out and left to do the paperwork. The trainee then did whatever stitching, stapling, or gluing was necessary to make the patient whole again.

In that old system, trainees were in charge for several hours of each procedure. It wasn’t much different for laparoscopic surgery (sometimes called “minimally invasive surgery”), in which tools and cameras are put into the patient via tiny slits. In those surgeries, trainees did much of the preliminary work and cleanup as well. This system of master-apprentice cooperation was so entrenched that hours spent in the operating room (OR) are still seen as a proxy for skill development.

That’s not working in robotic surgery. Surgical robots have become increasingly prevalent in hospitals ever since the da Vinci Surgical System was approved by the U.S. Food and Drug Administration in 2000. The da Vinci robot, from the Silicon Valley–based company Intuitive Surgical, dominates the market today. Intuitive has more than 6,700 machines in hospitals around the world, and the company says that in the United States, da Vinci machines are used in 100 percent of top-rated hospitals for cancer, urology, gynecology, and gastroenterology diseases. There are also a variety of specialized robotic systems from other companies that are used in fields such as orthopedics, neurology, and ophthalmology.

In robotic surgeries, the most dangerous times are at the beginning and the end, when the surgical team “docks” the massive robot to the patient. For the current generation of da Vinci systems, that means positioning four robotic arms tipped with surgical tools and creating “ports” for those tools by inserting metal cylinders into the patient’s abdomen via small incisions. The first port allows the entry of the camera; the other ports are used for scalpels, graspers, cauterizing instruments, staplers, or other tools.

Once the robotic arms are in place and instruments are inserted, the surgeon “scrubs out” and takes up position perhaps 15 feet away from the patient in the immersive da Vinci control console, which provides a stereoscopic view. The surgeon’s hands are on two multipurpose controllers that can move and rotate the instruments in all directions; by switching between instruments, the surgeon’s two hands can easily control all four robotic arms.

The da Vinci Surgical System has four arms tipped with exchangeable surgical tools. One arm typically inserts the camera while others insert tools such as scalpels, graspers, cauterizing instruments, and staplers. Spencer Lowell

And the trainee… well, the trainee gets to watch from another console, if there is one. While the lead surgeon could theoretically give the trainee one of the robot arms to control, in practice it never happens. And surgeons are reluctant to give the trainee control over all the arms because they know that will make the procedure take longer, and the risk to the patient goes up nonlinearly with elapsed time under anesthesia.

I began researching the impact of surgical robots on surgical technique and education in 2013. My studies have found that hospitals that adopted the technology have most often turned trainees into optional assistants in the OR, meaning that they begin practicing as “real” surgeons without enough skill. Reversing this trend would require sweeping institutional change, which I don’t expect to happen anytime soon. So, I’m working with collaborators on an alternative solution for surgical skill learning. The platform we’re creating could turn out to be broadly useful, perhaps even turning into a blueprint for 21st-century apprenticeship.

Surgical robots are marvels of engineering in many ways. The da Vinci system gives surgeons a magnified view and robotic hands that never shake, enabling very precise surgical maneuvers. It also provides more efficient and intuitive control than surgeons get from laparoscopic tools: Those operate on fulcrums, so moving a hand to the left moves the tool to the right. The da Vinci robot also provides haptic feedback, with earlier models vibrating the controllers if the software detected instrument “clashes,” and more recent models providing similar feedback when surgeons move too quickly or operate out of the visual field. And the ergonomic consoles are certainly easier on surgeons’ bodies; they no longer have to hunch over an operating table for hours at a time. The robots have also been a marketing phenomenon that has led to a robotic-surgery arms race, with mid-tier hospitals advertising their high-tech capabilities.

Many people assume that patient outcomes must be better with robotic surgery. It’s not obvious that’s true. In fact, a recent survey of 50 randomized controlled trials that compared robotic surgery to conventional and laparoscopic surgeries found that outcomes were comparable, and robotic surgeries were actually a bit slower. From my perspective, focusing on education, it’s something of a miracle that outcomes aren’t worse, given that residents are going to their first jobs without the necessary experience. It may be that the outcomes of inexperienced junior surgeons are counterbalanced by those of senior surgeons—or it may be that junior surgeons are really learning on their first patients “in the wild,” which is a somewhat uncomfortable idea. This is a hot research area, so we should know more soon.

At a da Vinci surgical console, a surgeon can use the two multipurpose controllers to manipulate the four robotic arms. The surgeon has a stereoscopic and magnified view of the surgical area. Top: Spencer Lowell; bottom: Universal Images Group/Getty Images

It may seem counterintuitive that surgical trainees need more training time. To become a surgeon, a person must first spend four years in medical school and then at least five years in a residency program. Medical residents are famously overworked and sleep-deprived, to the extent that the United States passed regulations in 2003 limiting their workweek to 80 hours. But although surgical residents spend many hours in the OR, my findings show that those hours aren’t giving them the skills they need. And because they’re always racing from one patient-related task to the next, they spend almost no time on simulator programs, even though such programs are available. The last time I checked on this situation, about a year ago, most hospitals mandated that residents spend about four hours per year on simulators. That’s like asking someone to play a video game for four hours per year to prepare for a life-or-death situation.

In many ways, the issues arising in robotic surgery mirror those confronted by other professions as they have come to rely increasingly on automation. The situation is summed up as the “automation paradox”: The more advanced and reliable the automated system, the more crucial the contributions of the human operator. That’s because the system will inevitably encounter unexpected circumstances that fall outside its design parameters or will fail in some way. In those rare but critical moments, the operator must detect the failure and take over, quickly bringing the very human faculties of creativity and problem solving to bear on a tricky situation. Airline pilots became familiar with this issue as autopilot became ubiquitous, and the promise of self-driving cars is bringing this conversation to the general public. Surgical robots have quite limited autonomy at this point, so the surgical profession should learn from these examples and act now, changing the human-machine relationship to both preserve surgical skill and avert tragic crashes in the OR.

My conclusions come from two years spent studying the impact of robots on surgical training. I spent a great deal of time at five hospitals, observing 94 surgeries that took a total of 478 hours. I next conducted interviews at 13 more top-tier teaching hospitals around the United States, gathering information from senior surgeons and sets of trainees that the surgeons deemed high-performing or average. The paper I published in 2019 summarized my findings, which were dismaying. The small subset of trainees who succeeded in learning the skills of robotic surgery did so for one of three reasons: They specialized in robotics at the expense of everything else, they spent any spare minutes doing simulator programs and watching YouTube videos, or they ended up in situations where they performed surgeries with little supervision, struggling with procedures that were at the edge of their capabilities. I call all these practices “shadow learning,” as they all bucked the norms of medical education to some extent. I’ll explain each tactic in more detail.

Residents who engaged in “premature specialization” would begin, often in medical school and sometimes earlier, to give short shrift to other subjects or their personal lives so they could get robotics experience. Often, they sought out research projects or found mentors who would give them access. Losing out on generalist education about medicine or surgery may have repercussions for trainees. Most obviously, there are situations where surgeons must turn off the robots and open up the patient for a hands-on approach. That situation almost never occurs because of a robotic failure; it’s more likely to occur if something goes wrong during the robotic procedure. If the surgeon accidentally nicks a vein or cuts through a tumor in a way that causes a leakage of cancerous cells, the recovery mode is to undock the robot rapidly, cut the patient open, and fix the problem the old-fashioned way. My data strongly suggest that residents who prematurely specialize in robotics will not be adequately prepared to handle such situations.


The second practice of successful trainees was abstract rehearsal, spending their spare moments in simulators and carefully reviewing surgical videos. One resident told me that he watched a one-hour video of a certain procedure perhaps 200 times to understand every part of it. But passively watching videos helped only so much. Many recordings had been made public precisely because they were particularly good examples of a procedure; in other words, they were procedures where nothing went wrong, so they offered few lessons in recognizing and recovering from errors.

Practicing on the simulator was helpful for trainees, giving them fluency in the basics of robotic control that might impress a senior surgeon in the OR and cause the trainee to get more time on the console. But in the case of the da Vinci system, the simulator software was often only available via the real console, so residents could only practice with it when an OR was empty—which typically meant staying at the hospital into the evening. A few elite institutions had simulation centers, but these were often some distance from the hospital. Most residents didn’t shirk other responsibilities to make the time for such dedicated practice.

An additional drawback of the simulators, some senior surgeons told me, was that they don’t include enough examples of the myriad and compounding ways in which things can go wrong during surgery. Even the best surgeons make errors, but they recover from them: For example, a surgeon might accidentally nick a small blood vessel with a scalpel but quickly seal the cut and move on. In surgery and many other occupations, one of the most important things that trainees need to learn is how to make errors and recover from them.

The final practice of successful trainees was finding situations in which they were able to operate on a patient with little supervision, often working near the edge of their competency and often in violation of hospital policies. Some were working under “superstar” surgeons who were officially in charge of several simultaneous procedures, for example. In such cases, the expert would swoop in only for the trickiest part of each operation. Others rotated from high-status hospitals to departments or hospitals that had relatively little experience with robotic surgery, making the trainees seem competent and trustworthy. Middle-tier hospitals also put less pressure on surgeons to get procedures done quickly, so handing control to a trainee, which inevitably slows things down, was seen as more acceptable. Residents in all these situations were often tense and nervous, they told me, but their struggle was the source of their learning.

To change this situation in a systematic way would require overhauling surgical residency programs, which doesn’t seem likely to happen anytime soon. So, what else can be done?


In the past five years, there has been an explosion of apps and programs that enable digital rehearsal for surgical training (including both robotic techniques and others). Some, like Level EX and Orthobullets, offer quick games to learn anatomy or basic surgical moves. Others take an immersive approach, leveraging recent developments in virtual reality like the Oculus headset. One such VR system is Osso VR, which offers a curriculum of clinically accurate procedures that a trainee can practice in any location with a headset and Wi-Fi.

I’m working on something different: a collaborative learning process for surgical skill that I hope could be analogous to GitHub, the platform for hosting open-source software. On GitHub, a developer can post code, and others can build on it, sometimes disagreeing about the best way forward and creating branching paths. My collaborator Juho Kim and I are in the early stages of building a crowdsourced repository for annotated and annotatable surgical videos, not only eliminating the time required to search for useful videos on YouTube but also giving watchers a way to interact with the video and increase their active learning. Thankfully, we have a superb industry collaborator as well: the Michigan Urological Surgery Improvement Collaborative. They curate an open library of robotic urologic surgical videos that is known worldwide.

One somewhat similar platform for video-based learning already exists: C-SATS, now a subsidiary of Johnson & Johnson. That subscription-based platform enables surgeons to securely upload their own videos and uses AI to scrub out all personally identifying information, such as images of a patient’s face. It then gives surgeons personalized feedback on their performance.

If C-SATS is the Encyclopedia Britannica, we’ll be Wikipedia. We’re currently testing an alpha version of our free and open-source platform, which we call Surch, with groups of surgeons and residents at select top-tier teaching hospitals to determine which features would be the most valuable to them. We’ve asked testers to complete tasks they typically struggle with: finding good-quality surgical videos that match their learning objectives, processing videos actively by making notes on things like surgical phases and anatomy, and sharing those notes with others for feedback. It’s still an academic project, but based on the enthusiastic response we’ve gotten from testers, there seems to be demand for a commercial product. We may try to embed it in a surgical residency program for a year to test the platform further.
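
To make the idea of notes on surgical phases and anatomy concrete, here is a minimal sketch of what a shared annotation record might look like. This is purely illustrative: Surch’s actual data model isn’t public, and every field name below is hypothetical.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative sketch only; Surch's real schema is not public, and all names here are hypothetical.
    @dataclass
    class SurgicalAnnotation:
        video_id: str            # identifier of the uploaded or linked video
        start_seconds: float     # where in the video the note begins
        end_seconds: float       # where in the video the note ends
        phase: str               # e.g., "docking", "dissection", "closure"
        anatomy: List[str] = field(default_factory=list)  # structures visible in this span
        note: str = ""           # the trainee's or mentor's comment
        author: str = ""         # who wrote the note, so others can respond

    # Example: a resident flags a tricky dissection step for feedback.
    example = SurgicalAnnotation(
        video_id="prostatectomy-demo-042",
        start_seconds=1312.0,
        end_seconds=1388.5,
        phase="dissection",
        anatomy=["neurovascular bundle"],
        note="How would you avoid traction injury here?",
        author="resident_pgy4",
    )
    print(example.phase, example.note)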


I believe that we need a 21st-century infrastructure for apprenticeship. The problems I found in robotic skill development have arisen because surgeons are relying on an apprenticeship model that was invented many thousands of years ago: Watch an expert for a while, get increasingly involved, then start to help more junior members along. This process goes by many names—in surgery, it’s called “see one, do one, teach one”—but it always requires one-on-one collaboration in real work, and it’s therefore not remotely scalable.

Since the 1990s, our societies have invested heavily in the infrastructure needed to scale formal learning of explicit knowledge; think of the proliferation of online lectures, documents, quizzes, group chats, and bulletin boards. We need the equivalent infrastructure for embodied skill if we’re going to build the capabilities we need for new kinds of work.

My collaborators and I imagine our Surch platform evolving into an AI-enabled global GitHub for skill learning. Any form of procedural knowledge could be captured, studied, and shared on this kind of platform—supported by AI, people could efficiently and collaboratively learn how to shuck oysters, remove tree stumps, change the oil in their cars, and countless other tasks. Of course, we’ll be grateful and excited if our system makes a difference just for surgeons. But the world requires many skills that you can’t write down, and we need to find a modern way to keep these capabilities alive.
