IEEE Spectrum Robotics

Episode 2: How Labrador and iRobot Create Domestic Robots That Really Help

Evan Ackerman: I’m Evan Ackerman, and welcome to ChatBot, a new podcast from IEEE Spectrum where robotics experts interview each other about things that they find fascinating. On this episode of ChatBot, we’ll be talking with Mike Dooley and Chris Jones about useful robots in the home. Mike Dooley is the CEO and co-founder of Labrador Systems, the startup that’s developing an assistive robot in the form of a sort of semi-autonomous mobile table that can help people move things around their homes. Before founding Labrador, Mike led the development of Evolution Robotics’ innovative floor-cleaning robots. And when Evolution was acquired by iRobot in 2012, Mike became iRobot’s VP of product and business development. Labrador Systems is getting ready to launch its first robot, the Labrador Retriever, in 2023. Chris Jones is the chief technology officer at iRobot, which is arguably one of the most successful commercial robotics companies of all time. Chris has been at iRobot since 2005, and he spent several years as a senior investigator at iRobot research working on some of iRobot’s more unusual and experimental projects. iRobot Ventures is one of the investors in Labrador Systems. Chris, you were doing some interesting stuff at iRobot back in the day too. I think a lot of people may not know how diverse iRobot’s robotics projects were.

Chris Jones: I think iRobot as a company, of course, being around since 1990, has done all sorts of things. Toys, commercial robots, consumer, military, industrial, all sorts of different things. But yeah, myself in particular, I spent the first seven, eight years of my time at iRobot doing a lot of super fun, kind of far-out-there research types of projects, a lot of them funded by places like DARPA, and working with some great academic collaborators and, of course, a whole crew of colleagues at iRobot. But yeah, some of those ranged from completely squishy robots to robot arms to robots that could climb mountainsides to robots under the water, all sorts of different fun, useful, and really challenging types of robot concepts. The challenge is what makes it fun.

Ackerman: And those are all getting incorporated to the next generation Roomba, right?

Jones: I don’t know that I can comment on—

Ackerman: That’s not a no. Yeah. Okay. So Mike, I want to make sure that people who aren’t familiar with Labrador get a good understanding of what you’re working on. So can you describe kind of Labrador’s robot, what it does and why it’s important?

Mike Dooley: Yeah. So at Labrador, we’re developing a robot called the Retriever, and it’s really designed as an extra pair of hands for individuals who have some issue, whether pain, a health condition, or an injury, that impacts their daily activities, particularly in the home. So this is a robot designed to help people live more independently, to augment their abilities, and to give them back some degree of autonomy where the issue they’re facing has taken it away. After we previewed it at CES, the robot was called a self-driving shelf. It’s designed to be a mobile platform that’s about the size of a side table but has the ability to carry things as large as a laundry basket, or you can set dinner plates on it, and it automatically navigates from place to place. It raises up to countertop height when you’re by the kitchen sink and lowers down when you’re by your armchair. And it has the ability to retrieve, too. So it’s a cross between the robots that are used in warehousing and furniture, mixed together to make something that’s comfortable and safe for the home environment, but it’s really meant to help folks who have some difficulty moving themselves. It’s meant to give them some degree of that independence back, as well as extend the impact for caregivers.

Ackerman: Yeah, I thought that was a fantastic idea when I first saw it at CES, and I’m so glad that you’ve been able to continue working on it. And especially with some support from folks like iRobot, right? Chris, iRobot is an investor in Labrador?

Jones: Correct. Through iRobot Ventures, we’re an early investor in Labrador, and we continue to be super excited about what they’re doing. I mean, for us, anyone who has great ideas for how robots can help people, in particular assist people in their home with independent living, is working on something we strongly believe is going to be a great application for robots. And when you’re making investments at that earliest stage, I’ll just add, a lot of it is about the team, right? And so Mike and the rest of his team are super compelling. That, paired with a vision for something we believe is a great application for robots, makes it an easy decision, right, to say that’s someone we’d like to support. So we love seeing their progress.

Ackerman: Yeah, me too.

Dooley: And we appreciate your support very much. So yeah.

Ackerman: All right, so what do you guys want to talk about? Mike, you want to kick things off?

Dooley: I can lead off. Yeah, so in full disclosure, at some point in my life, I was-- Chris, what’s the official name for an iRobot employee? I forgot what they came up with. It’s not iRoboteer, is it?

Jones: iRoboteer. Yeah.

Dooley: Okay, okay. All right, so I was an iRoboteer in my past life and crossed over with Chris for a number of years. And I know they’ve renovated the building a couple times now, but a lot of the products or robots you mentioned at the beginning are on display in a museum. So I think my first question to Chris is: can you think of one of those, either one that you worked on or one you didn’t, where you go, “Man, this should have taken off,” or you wish it would have? Because there’s a lot in there.

Jones: Yes, there are a lot. You’re right. We have a museum, and it has been renovated in the last couple years, Mike, so you should come back and visit and check out the new updated museum. How would I answer that? There are so many things in there. I would say one that I have some sentimentality toward, and I think it holds some really compelling promise even though, at least to date, it hasn’t gone anywhere outside of the museum, Evan, is related to the squishy robots I was talking about. In my mind, one of the key challenges in unlocking future value in robots, and in particular in autonomous robots in the home, is manipulation, physical manipulation of the environment in the home. And Mike and Labrador are doing a little bit of this, right, by being able to maneuver and pick up, carry, and drop off some things around the home. But the idea of a robot that’s able to physically grasp objects, pick them up off the floor or off a counter, open and close doors, all of those things, is kind of the Holy Grail, right, if you can do it cost-effectively and robustly. In the home, there are all sorts of great applications for that. And one of those research projects that’s in the museum was something called the Jamming Gripper. Mike, I don’t know if you remember seeing that at all, but this takes me back. And Evan, actually, I’m sure there are some IEEE stories and stuff back in the day from this. But this was a very compliant, soft manipulator. It’s not a hand. It’s actually very close to imagining a very soft membrane that’s filled with coffee grounds. So imagine a bag of coffee, right? Very soft and compliant.

But vacuum-packed coffee, you pull a vacuum on that bag. It turns rigid in the shape that it was in. It’s like a brick, which is a great concept for thinking about robot manipulation. That’s one idea. We had spent some research time with some folks in academia, had built a huge number of prototypes, and I still feel like there’s something there. There’s a really interesting concept there that can help with that more general purpose manipulation of objects in the home. So Mike, if you want to talk to us about licensing, maybe we can do that for Labrador with all your applications.

Dooley: Yeah. Actually, that’s what you should add. It would probably increase your budget dramatically, but you should add live demonstrations to the museum. See if you can have projects to bring some of those back to life. Because I’m sure I saw it, but I never knew it did that.

Jones: I mean, maybe we can continue this. There might be a little bit of a thread to continue that question into—the first one that came to my mind, Mike, when I was thinking about what to ask. And it’s something I have a lot of admiration and respect for in you and how you do your job, which is that you’re super good at engaging and listening to users in their context to understand what their problems are, such that you can best articulate or define or ideate things that could help them address problems they encounter in their everyday life. And that then allows you, as a leader, to use that to motivate quick prototype development to get the next level of testing or validation of “what if this?”, right? And those things may or may not involve duct tape, right, some very crude things that are trying to elicit that response or feedback from a user in terms of, is this something that would be valuable to you in overcoming some challenges that I’ve observed you having, let’s say, in your home environment? So I’m curious, Mike, how do you think about that process, and how does that translate into shaping a product design or the identification of an opportunity? I’m curious what you’ve learned through Labrador. I know you spent a lot of time in people’s homes to do exactly that. So how do you conduct that work? What are you looking for? How does that guide your development process?

Dooley: The word for what you’re talking about is customer empathy: are you feeling their pain? Are you understanding their need, and how are you connecting with it? My undergrad’s in psychology, so I was always interested in what makes people think the way they do. I remember an iRobot study, going into a home. We were on the last day of testing with somebody, a busy mom, and we were testing Braava Jet. It’s a little robot that iRobot sells that’s really good for tight spaces, for spraying and scrubbing floors, like kitchens and bathrooms. And the mom said something, almost with exhaustion. I said, “What is it?” She says, “Does this do as good of a job as you could do?” And I think most people from iRobot would admit, “No.” Can it match the elbow grease, all the effort and everything I can put into this? And she says, “But at least I can set this up, hit a button, and I can go to sleep. And at least it’s getting the job done. It’s doing something, and it gives me my time back.” And when you hear that, people go, “Well, Roomba is just something that cleans for people or whatever.” Like, “No. Roomba gives people their time back.” And once you’re on that channel, then you start thinking about, “Okay, what can we do more with the product that does that, that’s hitting that sort of core thing?” So yeah, I think it’s having the humbleness to not build the product you want but to build to the need, and then also the humbleness about where you can meet that need and where you can’t. Because robotics is hard, and we can’t make Rosey yet, and things like that.

Ackerman: Mike, I’m curious, did you have to make compromises like that? Is there an example you could give with Labrador?

Dooley: Oh, jeez, all the— yeah. I mean, no, Labrador is perfect. No, I mean, we go through that all the time. On Labrador, no, we can’t do everything people want. I think there are different languages for minimum viable product or good enough. There was somebody at Amazon who used the term— I’m going to blank on it. It was like wonderful enough or something, or they have a nicer—

Jones: Lovable?

Dooley: Lovable. Yeah, lovable enough or something. And I think that’s what you have to remember. On one hand, you have to have this open heart that you want to help people. On the other hand, you have to have a really tight wallet, because you just can’t spend enough to meet everything that people want. A classic example: Labrador goes up and down within a certain range of height. Someone in a wheelchair would love it if we would go up to the upper cabinets above the kitchen sink or other locations. And when you look at that, mechanically we can, but then there are product realities about stability and tilt testing, and we have to fit those. Chris knows that well with Ava, for instance: how heavy the base has to be for every inch you raise the mass above a certain amount. And so we have to set a limit. You have to say, “Hey, here’s the envelope. We’re going to do this to this, or we’re going to carry this much, because that’s as much as we could deliver with this sort of function.” And then, is that lovable enough? Is that rewarding enough to people? And I think that’s the hard [inaudible], is that you have to do these deliveries within constraints. And I think sometimes when I’m talking to folks, they’re either outside robotics or they’re very much on the engineering side and not thinking about the product. They tend to think that you have to do everything. And that’s not how product development works. You have to do just the critical first step, because that makes this a category, and then you can do the next one and the next one. It brings to mind that Roomba has gone through an incredible evolution of what its functions were, how it worked, and its performance, from the very first version to what Chris and team offer now. But if they’d tried to do today’s version back then, they wouldn’t have been able to achieve it. And others failed because they probably went at it from the wrong angle.

Jones: Evan, I think you asked whether there was anything that involved operating under constraints. Product development in general, I presume, but certainly robotics, is all about constraints. How do you operate within those? How do you understand where those boundaries are, and how are you going to decide to constrain your solution, right, to make sure that it’s something that’s feasible for you to do? It’s meeting a compelling need. It’s feasible for you to do. You can robustly deliver it. Trying to get that entire equation to work means you do have to reckon with those constraints across the board to find the right solve. Mike, I’m curious. You do your user research, you have that customer empathy, you’ve perhaps worked through some of these surprising challenges that I’m sure you’ve encountered along the way with Labrador. You ultimately get to a point where you’re able to do pilots in homes, right? Maybe the duct tape is gone, or it’s at least hidden. It’s something that looks and feels more like a product, and you’re actually getting into some type of more extended pilot of the product, or the idea of the product, in users’ homes. What are the types of things you’re looking to accomplish with those pilots? What have you learned when you go from, “All right, I’ve been watching this user in their home with those challenges,” to, “Now I’m actually leaving something in their home without me being there and expecting them to be able to use it”? What’s the benefit or the learnings that you encounter in conducting that type of work?

Dooley: Yeah, it’s a weird type of experiment, and there are different schools of thought on how you do this. Some people want to go in and research everything to death and be a fly on the wall. And we went through this— I won’t say the source of it. A program we had to go through because of some of the funding that we’re getting from another project. And at the beginning, they put up a slide with a quote that I think is from Steve Jobs. I’m sure I’m going to butcher it, but it’s that people don’t know what they want until you show them, or something. I forget what the exact words are. And they were saying, “Yeah, that’s true for Steve Jobs, but for you, you can really talk to the customer and they’re going to tell you what they need.” I don’t believe that.

Jones: They need a faster horse, right? They don’t need a car.

Dooley: Yeah, exactly.

Jones: They’re going to tell you they need a faster horse.

Dooley: Yeah, so I’m in the Steve Jobs camp on that. And it’s not because people aren’t intelligent. It’s just that they’re not in that world of knowing what possibilities you’re talking about. So I think there is this sort of soft skill of, okay, listen to their pain point. What is the difficulty of it? You’ve got a hypothesis to say, “Okay, out of everything you said, I think there’s an overlap here, and now I want to find out—” and we did that. We did that in the beginning. We did different ways of explaining the concept. The first level was just explaining it over the phone and seeing what people thought of it, testing it almost neutrally. Say, “Hey, here’s an idea.” And then, “Oh, here’s an idea like Roomba, and here’s an idea like Alexa. What do you like or dislike?” Then we would actually build a prototype that was remote-controlled and bring it into their home, and now we finally do the leave-behind. And the whole thing is, how to say it, like you’re sort of releasing it to the world and we get out of the way. The next part is like letting a kid go play soccer on their own, and you’re not yelling or anything, you don’t even watch. You just let it happen. And what you’re trying to do is organically look at how people— you’ve created this new reality. How are people interacting with it? And what we can see is that the robots, they won’t do this in the future, but right now they talk on Slack. So when a user sends the robot to the kitchen, I can look up and I can see, “Hey, user one just sent it to the kitchen, and now they’re sending it to their armchair, and they’re probably having an afternoon snack. Oh, they sent it to the laundry room. Now they sent it over to the closet. They’re doing the laundry.” And the thing for us was just watching how fast people were adopting certain things, and then what they were using it for. And the striking thing that was—

Jones: That’s interesting.

Dooley: Yeah, go ahead.

Jones: I was just going to say, I mean, that’s interesting because I think I’m sure it’s very natural to put the product in someone’s home and kind of have a rigid expectation of, “No, no, this is how you use it. No, no, you’re doing it wrong. Let me show you how you use this.” But what you’re saying is it’s almost, yeah, you’re trying your best to solve their need here, but at some point you kind of leave it there, and now you’re also back into that empathy mode. It’s like, “Now with this tool, how do you use it?” and see kind of what happens.

Dooley: I think you said it in a really good way: you’ve changed this variable in the experiment. You’ve introduced this, and now you go back to just observing, just watching what they’re doing with it, being as unintrusive as possible, which means we’re not there anymore. Yeah, the robot’s logging it and we can see it, but it’s just on them. We’re trying to stay out of the process and see how they engage with it. And that’s sort of the thing that— we’ve shared it before, but we were seeing that people were using it 90 to 100 times a month, especially after the first month. We were looking at the steady state. Would this become a habit or routine, and then what were they using it for?

Jones: So you’re saying that when you see that, you have a data point of one, or a small number, but you have such a tangible understanding of the impact this seems to be having that, as an entrepreneur, it gives you a lot of confidence that may not be visible to people outside the walls who are just trying to look at what you’re doing in the business. They see one data point, which is harder to grapple with. But you, being that close and understanding that connection between what the product is doing and the needs, that gives you or the team a substantial confidence boost, right? “This is working. We need to scale it. We have to show that this ports to other people in their homes, etc.” But it gives you that confidence.

Dooley: Yeah, and then when we take the robots away, because we only have so many and we rotate them, getting the guilt trip emojis two months later from people, “I miss my robot. When are you going to build a new one?” and all that and stuff. So—
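Dooley describes the pilot robots reporting their destination events to a Slack channel so the team can follow usage remotely. Here is a minimal sketch of that kind of telemetry, assuming a generic Slack incoming webhook; the webhook URL, user ID, and helper below are placeholders for illustration, not Labrador’s actual software.

```python
# Minimal sketch: post a robot destination event to a Slack channel via an
# incoming webhook. The webhook URL and event fields are placeholders,
# not Labrador's real telemetry pipeline.
from datetime import datetime, timezone

import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def report_destination(user_id: str, destination: str) -> None:
    """Send a 'robot sent to <destination>' message so the team can follow pilot usage."""
    text = (
        f"{datetime.now(timezone.utc).isoformat()} "
        f"user {user_id} sent the robot to: {destination}"
    )
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=5)

# Example: report_destination("user-1", "kitchen")
```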

Jones: Do people name the robots?

Dooley: Yeah. They immediately do that and come up with creative names for it. One was called Rosey, naturally, but another— I’m forgetting the name she called it. It was inspired by a science fiction AI companion. And there were just quite a few different angles to it, because she saw this as her assistant. But yeah, I think, again, for a robot, the classic thing at CES is to make a robot with a face and arms that doesn’t really do anything with those, but it pretends to be humanoid or human-like. And so we went the entire other route with this. And the fact that people still relate to it that way means that-- we’re not trying to be cold or dispassionate. We’re just really interested in, can they get that value? Are they reacting to what the robot is doing, not to the sort of halo that you dressed it up with?

Jones: Yeah, I mean, as you know, like with Roomba or Braava and things like that, it’s the same thing. People project anthropomorphism or project that personality onto them, but that’s not really there, right, in a strong way. So yeah.

Dooley: Yeah, no, and it’s weird. And it’s something they do with robots in a weird way that they don’t-- people don’t name their dishwasher usually or something. But no, I would have-

Jones: You don’t?

Dooley: Yeah, [inaudible]. I did for a while. The stove got jealous, and then we had this whole thing when the refrigerator got into it.

Ackerman: I’ve heard anecdotally that maybe this was true with PackBots. I don’t know if it’s true with Roombas. That people want their robot back. They don’t want you to replace their old robot with a new robot. They want you to fix the old robot and have that same physical robot. It’s that lovely connection.

Jones: Yeah, certainly, PackBot on kind of the military robot side for bomb disposal and things like that, you would directly get those technicians who had a damaged robot, who they didn’t want a new robot. They wanted this one fixed, right? Because again, they anthropomorphize or there is some type of a bond there. And I think that’s been true with all of the robots, right? It’s something about the mobility, right, that embodies them with some type of a-- people project a personality on it. So they don’t have to be fancy and have arms and faces necessarily for people to project that on them. So that seems to be a common trait for any autonomously mobile platform.

Ackerman: Yeah. Mike, it was interesting to hear you say that. You’re being very thoughtful about that, and so I’m wondering if Chris, you can address that a little bit too. I don’t know if they do this anymore, but for a while, robots would speak to you, and I think it was a female voice that they had if they had an issue or something or needed to be cleaned. And that I always found to be an interesting choice because it’s sort of like the company is now giving this robot a human characteristic that’s very explicit. And I’m wondering how much thought went into that, and has that changed over the years about how much you’re willing to encourage people to anthropomorphize?

Jones: I mean, it’s a good question. That’s evolved, I would say, over the years, from not so much to more of a vocalization coming from the robot for certain scenarios. It is an important part. For some users, that is a primary way of interacting. I would say more of that type of feedback these days comes through the mobile experience, through the app, to give feedback, additional information, actionable next steps. If you need to empty the dustbin or whatever it is, that’s just a richer place to put that and a more accepted or common way for that to happen. So that’s the direction things have trended, but that’s not because we don’t want to humanize the robot itself. It’s just a more practical place, where people these days will expect it. It’s almost like Mike was saying about the dishwasher and the stove, etc. If everything is trying to talk to you like that or project its own embodiment into your space, it could be overwhelming. So I think it’s easier to connect people at the right place and the right time with the right information if it’s through the mobile experience.

But it is. That human-robot interaction, that experience design, is a nuanced and tricky one. I’m certainly not an expert there myself, but it’s hard to find the right balance, the right mix of what you ask or expect of the user versus what you assume or don’t give them an option on. Because you also don’t want to overload them with too much information or too many options or too many questions, right, as they try to operate the product. So sometimes you do have to make assumptions, set defaults, right, that maybe can be changed if there’s really a need, but that might require more digging. And Mike, I was curious. That was a question I had for you: you have a physically, meaningfully sized product that’s operating autonomously in someone’s home, right?

Dooley: Yes.

Jones: Roomba can drive around and will navigate, and it’s a little more expected that it might bump into some things as it’s trying to clean up against walls or furniture and all of that. It’s small enough that that isn’t an issue. How do you design for a product of the size that you’re working on? What went into the human-robot interaction side of that to allow people who need to use this in their home, who are not technologists, to take advantage of the great value, right, that you’re trying to deliver for them? It’s got to be super simple. How did you think about that HRI design?

Dooley: There’s a lot wrapped into that. I think the bus stop is the first part of it. What’s the simplest way that they can command it? It’s a metaphor everybody can relate to, like armchair or front door, that sort of thing. And so the idea that the robot just goes to these destinations is super simplifying. People get that. It’s almost a nanosecond, how fast they get that metaphor. So that was one part of it. And then you sort of explain the rules of the road, how the robot goes from place to place. It’s got these bus routes, but they’re elastic, and it can go around you if needed. But there are all these types of interactions. Okay, we figured out what happens when you’re coming down the hall and the robot’s coming the other way. Let’s say it’s somebody else, and they just walk toward each other. I know in hospitals, the robot’s programmed to go to the side of the corridor. There’s no side in a home. That’s the stuff. So those are things that we still have to iron out, but there are timeouts and things like that. We’re not doing it yet, but it’d be great to recognize that that’s a person, not a closed door or something, and respond to it. So right now, we have to tell the users, “Okay, it’ll spin for a time to make sure you’re there, but then it’ll give up. And if you really wanted to, you could tell it to go back from your app. You could get out of the way if you want, or you could stop it by doing this.”

And so that’ll get refined as we get to the market, but those interactions, yeah, you’re right. You have this big robot coming down the hall. And one of the surprising things was it’s not just people. One of the women in the pilot had a Border Collie, and Border Collies are, by instinct, bred to herd sheep. The robot’s very quiet, but when she would command it, the dog would hear the robot coming down the hall and would put its paw out to stop it, and that became its game. It started herding the robot. So it’s really this weird thing, this metaphor you’re getting at.

Jones: Robots are pretty stubborn. The robot probably just sat there for like five minutes, like, “Come on. Who’s going to blink?”

Dooley: Yeah. Yeah. And the AI we’d love to add, we have to catch up with where you guys are at or license some of your vision recognition algorithms because, first, we’re trying to navigate and avoid obstacles. And that’s where all the tech is going into in terms of the design and the tiers of safety that we’re doing. But it’s just like what the user wanted in that case is, if it’s the dog, can you play my voice, say, “Get out” or, “Move,” or whatever, or something, “Go away”? Because she sent me a video of this. It’s like it was happening to her too, is she would send the robot out. The dogs would get all excited, and she’s behind it in her wheelchair. And now the dogs are waiting for her on the other side of the robot, the robot’s wondering what to do, and they’re all in the hall. And so yeah, there’s this sort of complication that gets in there that you have multiple agents going on there.
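Dooley’s “bus stop” model above boils the interaction down to named destinations that the robot routes between on its own. Below is a minimal sketch of that idea; the waypoint map and the navigation call are hypothetical placeholders, not Labrador’s actual API.

```python
# Minimal sketch of the "bus stop" command model: users pick a named destination,
# and routing between stops is entirely the robot's job. BUS_STOPS and
# robot.navigate_to() are hypothetical placeholders for illustration.
from typing import Dict, Tuple

# Named destinations mapped to map coordinates (x, y) chosen at installation time.
BUS_STOPS: Dict[str, Tuple[float, float]] = {
    "kitchen": (3.2, 1.5),
    "armchair": (0.8, 4.0),
    "laundry room": (6.1, 2.3),
    "front door": (0.0, 0.0),
}

def go_to(robot, stop_name: str) -> None:
    """Send the robot to a named bus stop along its stored (elastic) routes."""
    x, y = BUS_STOPS[stop_name]
    robot.navigate_to(x, y)  # hypothetical navigation call on the robot object
```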

Ackerman: Maybe one more question from each of you guys. Mike, you want to go first?

Dooley: I’m trying to think. I have one more. And when you have new engineers start—let’s say they haven’t worked on robots before. They might be experienced. They’re coming out of school or they’re from other industries and they’re coming in. What is some key thing that they learn, or what sort of transformation goes on in their mind when they finally get in the zone of what it means to develop robots? And it’s a really broad question, but there’s sort of a rookie thing.

Jones: Yeah. What’s an aha moment that’s common for people new to robotics? I think this is woven throughout this entire conversation, which is, at a macro level, robots are actually hard. It’s difficult to put the entire electromechanical-software system together. It’s hard to perceive the world. If a robot’s driving around the home on its own, it needs to have a pretty good understanding of what’s around it. Is something there, is something not there? The richer that understanding can be, the more adaptable or personalized the robot can be. But generating that understanding is also hard. Robots have to be built to deal with all of those unanticipated scenarios that they’re going to encounter when they’re let out into the wild. So I think it’s surprising to a lot of people how long that long tail of corner cases ends up being that you have to grapple with. Ignoring one of them can end the product, right? It’s a long tail of things, and if any one of them rears its head enough, users will stop using the product, because, “Well, this thing doesn’t work, and this has happened like twice to me now in the year I’ve had it. I’m kind of done with it,” right?

So you really have to grapple with the very long, long tail of corner cases when the technology hits the real world. I think that’s a super surprising one for people who are new to robotics. It’s more than being a consumer hardware or consumer electronics company. You do need to deal with those challenges of perception and mobility in the home, the chaos of— specifically, you’re talking about the home environment here, not the more structured environments on the industrial side. And I think everyone has to go through that learning curve of understanding the impact that can have.

Dooley: Yeah. Of the dogs and cats.

Jones: Yeah, I mean, who would have thought cats are going to jump on the thing or Border Collies are going to try to herd it, right? And you don’t learn those things until you get products out there. That’s what I was asking you about, Mike, with the pilots and what you hope to learn from that experience. You have to take that step if you’re going to start figuring out what those elements are going to look like. It’s very hard to do just intellectually, or on paper, or in the lab. You have to let them out there. So that’s a learning lesson there. Mike, maybe a similar question for you, but--

Ackerman: This is the last one, so make it a good one.

Jones: Yep. The last one, it better be a good one, huh? It’s a similar question for you, but maybe cut more to address an entrepreneur in the robotics space. For a robot company to succeed, there are a lot of, I’ll call them, ecosystem partners that have to be there. Manufacturing, channel or go-to-market partners, funding to support a capital-intensive development process, and many more. I’m curious, what have you learned, or what do people miss, going into robotics development or looking to be a robotics entrepreneur? What have you seen? What are the partners that are the most important? And I’m not asking for, “Oh, iRobot’s an investor. Speak nicely on the financial investor side.” That’s not what I’m after. But what have you learned about which set of partners you’d better not ignore, because if one of them falls through or doesn’t work or is ineffective, it’s going to be hard for all the other pieces to come together?

Dooley: Yeah, it’s complex. I think, just like you said, robots are hard. I think when we got acquired by iRobot and we were having some of the first meetings over— it was Mike from software. Halloran.

Ackerman: This was Evolution Robotics?

Dooley: Evolution. Yeah. Mike Halloran from iRobot came to Evolution’s office, and he just said, “Robots are hard. They’re really hard.” And that’s the point where we knew there was harmony. We were sort of under that same thing. And so, to everything Chris is saying: all of that is high stakes. You have to be good enough on all those fronts, with all those partners. Some of it is critical-path technology. Depth cameras, for instance: that function is really critical to us, and it’s critical that it works well, and then there’s cost and scale. So we have to be flexible about how we deal with that, looking at that chain and how we start at one level and scale it through. So you look at, okay, what are the key enabling technologies that have to work? That’s one bucket. Then there are the partnerships on the business side; we’re in a complex ecosystem. I think the other rude awakening when people look at this is, “Well, as people get older, they have disabilities. That’s what insurance funds, right?” It’s like, “No, it doesn’t,” for a lot of it, unless you have specific types of insurance. We’re partnering with Nationwide. They have long-term care insurance, which pays for these sorts of issues, and that’s why they’re working with us. Or Medicaid will get into these issues, depending on somebody’s need.

And so I think what we’re trying to understand—and this goes back to that original question about customer empathy—is how we adjust what we’re doing. We have this vision. I want to help people like my mom, where she is now and where she was 10 years ago, when she was first experiencing difficulties with mobility. And we have to stage that. We have to get through that progression. So who are the people we work with now, solving a pain point that they have control over and that is economically viable for them? And sometimes that means adjusting a bit of what we’re doing, because it’s just one step onto the long path as we do it.

Ackerman: Awesome. Well, thank you both again. This was a great conversation.

Jones: Yeah, thanks for having us and for hosting, Evan and Mike. Great to talk to you.

Dooley: Nice seeing you again, Chris and Evan. Same. Really enjoyed it.

Ackerman: We’ve been talking with Chris Jones from iRobot and Mike Dooley from Labrador Systems about developing robots for the home. Thanks again to our guests for joining us. For ChatBot and IEEE Spectrum, I’m Evan Ackerman.


Episode 1: Making Boston Dynamics’ Robots Dance

Evan Ackerman: I’m Evan Ackerman, and welcome to ChatBot, a robotics podcast from IEEE Spectrum. On this episode of ChatBot, we’ll be talking with Monica Thomas and Amy LaViers about robots and dance. Monica Thomas is a dancer and choreographer. Monica has worked with Boston Dynamics to choreograph some of their robot videos, in which Atlas, Spot, and even Handle dance to songs like “Do You Love Me?” The “Do You Love Me?” video has been viewed 37 million times, and if you haven’t seen it yet, it’s pretty amazing to see how these robots can move. Amy LaViers is the director of the Robotics, Automation, and Dance Lab, or RAD Lab, which she founded in 2015 as a professor in Mechanical Science and Engineering at the University of Illinois, Urbana-Champaign. The RAD Lab is a collective for art making, commercialization, education, outreach, and research at the intersection of dance and robotics. And Amy’s work explores the creative relationships between machines and humans, as expressed through movement. So Monica, can you just tell me-- I think people in the robotics field may not know who you are or why you’re on the podcast at this point, so can you just describe how you initially got involved with Boston Dynamics?

Monica Thomas: Yeah. So I got involved really casually. I know people who work at Boston Dynamics, and Marc Raibert, their founder and head. They’d been working on Spot, and they added the arm to Spot. And Marc was kind of like, “I kind of think this could dance.” And they were like, “Do you think this could dance?” And I was like, “It could definitely dance. That definitely could do a lot of dancing.” And so we just started trying to figure out, can it move in a way that feels like dance to people watching it? And the first thing we made was Uptown Spot. And it was really just figuring out moves that the robot kind of already does naturally. And that’s when they started developing, I think, Choreographer, their tool. But in terms of my thinking, I was just watching what the robot did in its normal patterns, like going up, going down, walking from place to place, different steps, different gaits: what is interesting to me, what looks beautiful to me, what looks funny to me, and then imagining what else we could be doing, considering the angles of the joints. And then it just grew from there. And so once that one was out, Marc was like, “What about the rest of the robots? Could they dance? Maybe we could do a dance with all of the robots.” And I was like, “We could definitely do a dance with all of the robots. Any shape can dance.” So that’s when we started working on what turned into “Do You Love Me?” I didn’t really realize what a big deal it was until it came out and it went viral. And I was like, “Oh—” are we allowed to swear, or—?

Ackerman: Oh, yeah. Yeah.

Thomas: Yeah. So I was like, “[bleep bleep, bleeeep] is this?” I didn’t know how to deal with it. I didn’t know how to think about it. As a performer, the largest audience I performed for in a day was like 700 people, which is a big audience as a live performer. So when you’re hitting millions, it’s just like it doesn’t even make sense anymore, and yeah. So that was pretty mind-boggling. And then also because of kind of how it was introduced and because there is a whole world of choreo-robotics, which I was not really aware of because I was just doing my thing. Then I realized there’s all of this work that’s been happening that I couldn’t reference, didn’t know about, and conversations that were really important in the field that I also was unaware of and then suddenly was a part of. So I think doing work that has more viewership is really—it was a trip and a half—is a trip and a half. I’m still learning about it. Does that answer your question?

Ackerman: Yeah. Definitely.

Thomas: It’s a long-winded answer, but.

Ackerman: And Amy, so you have been working in these two disciplines for a long time, in the disciplines of robotics and in dance. So what made you decide to combine these two things, and why is that important?

Amy LaViers: Yeah. Well, both things, I guess in some way, have always been present in my life. I’ve danced since I was three, probably, and my dad and all of his brothers and my grandfathers were engineers. So in some sense, they were always there. And it was really-- I could tell you the date. I sometimes forget what it was, but it was a Thursday, and I was taking classes in dance and in control of mechanical systems, and I was realizing this overlap. I mean, I don’t think I’m combining them. I feel like they already kind of have this intersection that just exists. And I realized-- or I stumbled into that intersection myself, and I found lots of people working in it. And I was-- oh, my interests in both these fields kind of reinforce one another in a way that’s really exciting and interesting. I also happened to be almost graduating-- I was in the last class of my junior year of college, so I was thinking, “What am I going to do with myself?” Right? So it was very happenstance in that way. And again, I mean, I just felt like I walked into a room where all of a sudden, a lot of things made sense to me, and a lot of interests of mine were both present.

Ackerman: And can you summarize, I guess, the importance here? Because I feel like— I’m sure this is something you’ve run into, is that it’s easy for engineers or roboticists just to be— I mean, honestly, a little bit dismissive of this idea that it’s important for robots to have this expressivity. So why is it important?

LaViers: That is a great question that if I could summarize what my life is like, it’s me on a computer going like this, trying to figure out the words to answer that succinctly. But one way I might ask it, earlier when we were talking, you mentioned this idea of functional behavior versus expressive behavior, which comes up a lot when we start thinking in this space. And I think one thing that happens-- and my training and background in Laban Movement Analysis really emphasizes this duality between function and expression as opposed to the either/or. It’s kind of like the mind-body split, the idea that these things are one integrated unit. Function and expression are an integrated unit. And something that is functional is really expressive. Something that is expressive is really functional.

Ackerman: It definitely answers the question. And it looks like Monica is resonating with you a little bit, so I’m just going to get out of the way here. Amy, do you want to just start this conversation with Monica?

LaViers: Sure. Sure. Monica has already answered, literally, my first question, so I’m already having to shuffle a little bit. But I’m going to rephrase. My first question was, can robots dance? And I love how emphatically and beautifully you answered that with, “Any shape can dance.” I think that’s so beautiful. That was a great answer, and I think it brings up— you can debate, is this dance, or is this not? But there’s also a way to look at any movement through the lens of dance, and that includes factory robots that nobody ever sees.

Thomas: It’s exciting. I mean, it’s a really nice way to walk through the world, so I actually recommend it for everyone, just like taking a time and seeing the movement around you as dance. I don’t know if it’s allowing it to be intentional or just to be special, meaningful, something.

LaViers: That’s a really big challenge, particularly for an autonomous system. And for any moving system, I think that’s hard, artificial or not. I mean, it’s hard for me. My family’s coming into town this weekend. I’m like, “How do I act so that they know I love them?” Right? That’s a dramatized version of real life, right: how do I be welcoming to my guests? And that’ll be: how do I move?

Thomas: What you’re saying is a reminder of one of the things I really enjoy about watching robots move: I’m allowed to project as much as I want onto them without taking away something from them. When you project too much onto people, you lose the person, and that’s not really fair. But when you’re projecting onto objects, things that we personify— or not even personify, that we anthropomorphize or whatever, it is just a projection of us. But it’s acceptable. It’s so nice for it to be acceptable, to have a place where you get to do that.

LaViers: Well, okay. Then can I ask my fourth question even though it’s not my turn? Because that’s just too perfect a setup for it, which is just: what did you learn about yourself working with these robots?

Thomas: Well, I learned how much I love visually watching movement. I’ve always watched, but I don’t think it was as clear to me how much I like movement. The work that I made was really about context. It was about what’s happening in society, what’s happening in me as a person. But I never got into that school of dance that really spends time just really paying attention to movement or letting movement develop or explore, exploring movement. That wasn’t what I was doing. And with robots, I was like, “Oh, but yeah, I get it better now. I see it more now.” So much in life right now, for me, is not contained, and it doesn’t have answers. And translating movement across species from my body to a robot, that does have answers. It has multiple answers. It’s not like there’s a yes and a no, but you can answer a question. And it’s so nice to answer questions sometimes. I sat with this thing, and here’s something I feel like is an acceptable solution. Wow. That’s a rarity in life. So I love that about working with robots. I mean, also, they’re cool, I think. And it is also— they’re just cool. I mean, that’s true too. It’s also interesting. I guess the last thing that I really loved—and I didn’t have much opportunity to do this or as much as you’d expect because of COVID—is being in space with robots. It’s really interesting, just like being in space with anything that is different than your norm is notable. Being in space with an animal that you’re not used to being with is notable. And there’s just something really cool about being with something very different. And for me, robots are very different and not acclimatized.

Ackerman: Okay. Monica, you want to ask a question or two?

Thomas: Yeah. I do. The order of my questions is ruined also. I was thinking about the RAD Lab, and I was wondering if there are guiding principles that you feel are really important in that interdisciplinary work that you’re doing, and also any lessons maybe from the other side that are worth sharing.

LaViers: The usual way I describe it, and describe my work more broadly, is that I think there are a lot of roboticists who hire dancers, and they make robots and those dancers help them. And there are a lot of dancers who hire engineers, and those engineers build something for them that they use inside of their work. And what I’m interested in, the little litmus test or challenge I paint for myself and my collaborators, is that we want to be right in between those two things, right, where we are making something. First of all, we’re treating each other as peers, as technical peers, as artistic peers. If the robot moves on stage, I mean, that’s choreography. If the choreographer asks for the robot to move in a certain way, that’s robotics. That’s the inflection point we want to be at. And so that means, for example, in terms of crediting the work, we try to credit the creative contributions. And not just, “Oh, well, you did 10 percent of the creative contributions.” We really try to treat each other as co-artistic collaborators and co-technical developers. And so artists are on our papers, and engineers are in our programs, to put it that way. And likewise, that changes the questions we want to ask. We want to make something that pushes robotics just an inch further, a millimeter further. And we want to do something that pushes dance just an inch further, a millimeter further. We would love it if people would ask us, “Is this dance?” We get “Is this robotics?” quite a lot. So that makes me feel like we must be doing something interesting in robotics.

And every now and then, I think we do something interesting for dance too, and certainly, many of my collaborators do. And that inflection point, that’s just what I think is interesting. That’s the room I stumbled into, where we’re asking those questions as opposed to just developing a robot and hiring someone to help us do that. I mean, it can be hard in that environment, where people feel like their expertise is being given to the other side. And then, where am I an expert? And we’ve heard editors at publication venues say, “Well, this dancer can’t be a co-author,” and we’ve had venues where we’re working on the program and people say, “Well, no, this engineer isn’t a performer,” but I’m like, “But he’s cueing the robot, and if he messes up, then we all mess up.” I mean, that’s vulnerability too. So we have those conversations that are really touchy and a little sensitive. How do you create that space where people feel safe and comfortable and valued and attributed for their work, and where they can make a track record and do this again in another project, in another context? So, I don’t know, if I’ve learned anything, I’ve learned that you just have to really talk about attribution all the time. I bring it up every time, and I bring it up before we even think about writing a paper. And then I bring it up when we make the draft. The first thing I put in the draft is everybody’s name in the order it’s going to appear, with the affiliations, and the subscripts on that don’t get added at the last minute. And when the editor of a very famous robotics venue says, “This person can’t be a co-author,” that person doesn’t get taken off as a co-author; that person is a co-author, and we figure out another way to make it work. And so I think that’s learning, or that’s just a struggle anyway.

Ackerman: Monica, I’m curious if when you saw the Boston Dynamics videos go viral, did you feel like there was much more of a focus on the robots and the mechanical capabilities than there was on the choreography and the dance? And if so, how did that make you feel?

Thomas: Yeah. So yes. Right. When dances I’ve made have been reviewed, which I’ve always really appreciated, it has been about the dance. It’s been about the choreography. And actually, kind of going way back to what we were talking about a couple things ago, a lot of the reviews that you get around this are about people, their reactions, right? Because, again, we can project so much onto robots. So I learned a lot about people, how people think about robots. There’s a lot of really overt themes, and then there’s individual nuance. But yeah, it wasn’t really about the dance, and it was in the middle of the pandemic too. So there’s really high isolation. I had no idea how people who cared about dance thought about it for a long time. And then every once in a while, I get one person here or one person there say something. So it’s a totally weird experience. Yes.

The way that I took information about the dance was by paying attention to the affective experience, the emotional experience, that people had watching this. We used the structures of the traditions of dance in it for an intentional reason. I chose that because I wasn’t trying to alarm people or show people ways that robots move that totally hit some old part of our brain that makes us absolutely panicked. That wasn’t my interest or the goal of that work. And honestly, at some point, it’d be really interesting to explore what the robots can just do versus what I, as a human, feel comfortable seeing them do. But the emotional response that people had told me a story, in a backward way, about what the dance was doing, and also what the music’s doing, because—let’s be real—that music does— right? We stacked the deck.

LaViers: Yeah. And now that brings— I feel like that serves up two of my questions, and I might let you pick which one we go to. I mean, one of my questions: I wrote down some of my favorite moments from the choreography that I thought we could discuss. Another question—and maybe we can do both of these in series—is a little bit about— I’ll blush even just saying it, and I’m so glad that people can’t see the blushing. But also, there’s been so much nodding, and I’m noticing that that won’t be in the audio recording. We’re nodding along to each other so much. But the other side—and you can just nod in a way that gives me your— the other question that comes up is, yeah, what is the monetary piece of this, and where are the power dynamics inside this? And how do you feel about how that sits now, as that video continues to make its rounds on the internet and establish value for Boston Dynamics?

Thomas: I would love to start with the first question. And the second one is super important, and maybe another day for that one.

Ackerman: Okay. That’s fair. That’s fair.

LaViers: Yep. I like that. I like that. So the first question. My favorite moments of the piece that you choreographed to “Do You Love Me?” for the Boston Dynamics robots: the swinging arms at the beginning, where you don’t fully know where this is going. It looks so casual and so, dare I say it, natural, although it’s completely artificial, right? And the proximal rotation of the legs, I feel like it’s a genius way of getting around having no spine. You really make use of things that look like hip joints or shoulder joints as a way of, to me, accessing a good wriggle or a good juicy moment. And then the Spot space hold, I call it, where the head of the Spot is holding in place and the robot wiggles around that, dances around that. And then the moment when you see all four of these complete, distinct bodies, and it looks like they’re dancing together. We touched on that earlier—any shape can dance—but making them all dance together I thought was really brilliant and effective in the work. So if one of those moments is super interesting, or you have a funny story about one, I thought we could talk about it further.

Thomas: I have a funny story about the hip joints. So the initial— well, not the initial, but when they do the mashed potato, that was the first dance move that we started working on, on Atlas. And for folks who don’t know, the mashed potato is kind of: the feet are going in and out; the knees are going in and out. So we ran into a couple of problems—with the mashed potato and the twist. I guess it’s a combo. Both of them want you to roll your feet on the ground, like a rub, and that friction was not good for the robots. So when we first started really moving into the twist, which has this torso twisting— the legs are twisting, the foot should be twisting on the floor. The foot was not twisting on the floor, and the legs were so turned out that the shape of the pelvic region looked like an over-full diaper. So, I mean, it was wiggling, but it made the robot look young. It made the robot look like it was in a diaper that needed to be changed. It did not look like a twist that anybody would want to do near anybody else. And it was really amazing how— I mean, it was just hilarious to see it. And the engineers come in. They’re really seeing the movement and trying to figure out what they need for the movement. And I was like, “Well, it looks like it has a very full diaper.” And they were like, “Oh.” They knew it didn’t quite look right, but it was like—because I think they really don’t project as much as I do. I’m very projective; that’s one of the ways that I watch work, or pull from the work, but that’s not what they were looking at. And so yeah, then you change the angles of the legs, how turned in it is and whatever, and it resolved to a degree, I think, fairly successfully. It doesn’t really look like a diaper anymore. But that wasn’t really— and also, to get that move right took us over a month.

Ackerman: Wow.

LaViers: Wow.

Thomas: We got much faster after that because it was the first, and we really learned. But it took a month of programming, me coming in, naming specific ways of reshifting it before we got a twist that felt natural if amended because it’s not the same way that--

LaViers: Yeah. Well, and it’s fascinating to think about how to get it to look the same. You had to change the way it did the movement, is what I heard you describing there, and I think that’s so fascinating, right? And just how distinct the morphologies are between our body and any of these bodies, even the very facile, human-ish looking Atlas; there’s still a lot of really nuanced, fine-grained, work-intensive human labor that goes into getting that to look the same as what we all think of as the twist or the mashed potato.

Thomas: Right. Right. And it does need to be something that we can project those dances onto, or it doesn’t work, in terms of this dance. It could work in another one. Yeah.

LaViers: Right. And you brought that up earlier, too, of trying to work inside of some established forms of dance as opposed to making us all terrified by the strange movement that can happen, which I think is interesting. And I hope one day you get to do that dance too.

Thomas: Yeah. No, I totally want to do that dance too.

Ackerman: Monica, do you have one last question you want to ask?

Thomas: I do. And this is— yeah. I want to ask you, kind of what does embodied or body-based intelligence offer in robotic engineering? So I feel like, you, more than anyone, can speak to that because I don’t do that side.

LaViers: Well, I mean, I think it can bring a couple of things. One, it can bring— I mean, the first moment in my career or life that that calls up for me is, I was watching one of my lab mates, when I was a doctoral student, give a talk about a quadruped robot that he was working on, and he was describing the crawling strategy, like the gait. And someone said— and I think it was roughly like, “Move the center of gravity inside the polygon of support, and then pick up— the polygon of support formed by three of the legs. And then pick up the fourth leg and move it. Establish a new polygon of support. Move the center of mass into that polygon of support.” And it’s described with these figures. Maybe there’s a center of gravity. It’s like a circle that’s like a checkerboard, and there’s a triangle, and there’s these legs. And someone stands up and is like, “That makes no sense like that. Why would you do that?” And I’m like, “Oh, oh, I know, oh, because that’s one of the ways you can crawl.” I actually didn’t get down on the floor and do it because I was not so outlandish at that point.

But today, in the RAD lab, that would be, “Everyone on all fours, try this strategy out.” Does it feel like a good idea? Are there other ideas that we would use to do this pattern that might be worth exploring here as well? And so truly rolling around on the floor and moving your body and pretending to be a quadruped, which— in my dance classes, it’s a very common thing to practice crawling because we all forget how to crawl. We want to crawl with the cross-lateral pattern and the homo-lateral pattern, and we want to keep our butts down-- or keep the butts up, but we want to have that optionality so that we look like we’re facile, natural crawlers. We train that, right? And so for a quadruped robot talk and discussion, I think there’s a very literal way that an embodied exploration of the idea is a completely legitimate way to do research.

Ackerman: Yeah. I mean, Monica, this is what you were saying, too, as you were working with these engineers. Sometimes it sounded like they could tell that something wasn’t quite right, but they didn’t know how to describe it, and they didn’t know how to fix it because they didn’t have that language and experience that both of you have.

Thomas: Yeah. Yeah, exactly that.

Ackerman: Okay. Well, I just want to ask you each one more really quick question before we end here, which is that, what is your favorite fictional robot and why? I hope this isn’t too difficult, especially since you both work with real robots, but. Amy, you want to go first?

LaViers: I mean, I’m going to feel like a party pooper. I don’t like any robots, real or fictional. The fictional ones annoy me because of the disambiguation issue; WALL-E and Eva are so cute, and I do love cute things, but are those machines, or are those characters? And are we losing sight of that? I mean, my favorite robot to watch move is the Keepon dancing to Spoon. That is something that if you’re having an off day, you google “Keepon dancing to Spoon” (Keepon is one word, K-E-E-P-O-N), and you just bop. It’s just a bop. I love it. It’s so simple and so pure and so right.

Ackerman: It’s one of my favorite robots of all time, Monica. I don’t know if you’ve seen this, but it’s two little yellow balls, like this, and it just goes up and down and rocks back and forth. But it does it to music, and it just does it so well. It’s amazing.

Thomas: I will definitely be watching that [crosstalk].

Ackerman: Yeah. And I should have expanded the question, and now I will expand it because Monica hasn’t answered yet. Favorite robot, real or fictional?

Thomas: So I don’t know if it’s my favorite. This one breaks my heart, and I’m currently having an empathy overdrive issue as a general problem. But there’s a robot installation - and I should know its name, but I don’t— where the robot reaches out, and it grabs the oil that they’ve created it to leak and pulls it towards its body. And it’s been doing this for several years now, but it’s really slowing down now. And I don’t think it even needs the oil. I don’t think it’s a robot that uses oil. It just thinks that it needs to keep it close. And it used to happy dance, and the oil has gotten so dark and the red rust color of, oh, this is so morbid of blood, but it just breaks my heart. So I think I love that robot and also want to save it in the really unhealthy way that we sometimes identify with things that we shouldn’t be thinking about that much.

Ackerman: And you both gave amazing answers to that question.

LaViers: And the piece is Sun Yuan and Peng Yu’s Can’t Help Myself.

Ackerman: That’s right. Yeah.

LaViers: And it is so beautiful. I couldn’t remember the artist’s name either, but—you’re right—it’s so beautiful.

Thomas: It’s beautiful. The movement is beautiful. It’s beautifully considered as an art piece, and the robot is gorgeous and heartbreaking.

Ackerman: Yeah. Those answers were so unexpected, and I love that. So thank you both, and thank you for being on this podcast. This was an amazing conversation. We didn’t have nearly enough time, so we’re going to have to come back to so much.

LaViers: Thank you for having me.

Thomas: Thank you so much for inviting me. [music]

Ackerman: We’ve been talking with Monica Thomas and Amy LaViers about robots and dance. And thanks again to our guests for joining us for ChatBot and IEEE Spectrum. I’m Evan Ackerman.


Microfliers, or miniature wireless robots deployed in numbers, are sometimes used today for large-scale surveillance and monitoring purposes, such as in environmental or biological studies. Because of the fliers’ ability to disperse in air, they can spread out to cover large areas after being dropped from a single location, including in places where access is otherwise difficult. Plus, they are smaller, lighter, and cheaper to deploy than multiple drones.

One of the challenges in creating more efficient microfliers has been in reducing power consumption. One way to do so, as researchers from the University of Washington (UW) and Université Grenoble Alpes have demonstrated, is to get rid of the battery. With inspiration from the Japanese art of paper folding, origami, they designed programmable microfliers that can disperse in the wind and change shape using electronic actuation. This is achieved by a solar-powered actuator that can produce up to 200 millinewtons of force in 25 milliseconds.

“Think of these little fliers as a sensor platform to measure environmental conditions, like, temperature, light, and other things.”
—Vikram Iyer, University of Washington

“The cool thing about these origami designs is, we’ve created a way for them to change shape in midair, completely battery free,” says Vikram Iyer, computer scientist and engineer at UW, one of the authors. “It’s a pretty small change in shape, but it creates a very dramatic change in falling behavior…that allows us to get some control over how these things are flying.”

Tumbling and stable states: A) The origami microflier here is in its tumbling state and B) postlanding configuration. As it descends, the flier tumbles, with a typical tumbling pattern pictured in C. D) The origami microflier is here in its stable descent state. The fliers’ range of landing locations, E, reveals its dispersal patterns after being released from its parent drone. Vicente Arroyos, Kyle Johnson, and Vikram Iyer/University of Washington

This research builds on the researchers’ earlier work published in 2022, demonstrating sensors that can disperse in air like dandelion seeds. For the current study, “the goal was to deploy hundreds of these sensors and control where they land, to achieve precise deployments,” says coauthor Shyamnath Gollakota, who leads the Mobile Intelligence Lab at UW. The microfliers, each weighing less than 500 milligrams, can travel almost 100 meters in a light breeze, and wirelessly transmit data about air pressure and temperature via Bluetooth up to a distance of 60 meters. The group’s findings were published in Science Robotics earlier this month.

Discovering the difference in the falling behavior of the two origami states was serendipity, Gollakota says: “When it is flat, it’s almost like a leaf, tumbling [in the] the wind,” he says. “A very slight change from flat to a little bit of a curvature [makes] it fall like a parachute in a very controlled motion.” In their tumbling state, in lateral wind gusts, the microfliers achieve up to three times the dispersal distance as in their stable state, he adds.

This close-up of the microflier reveals the electronics and circuitry on its top side. Vicente Arroyos, Kyle Johnson, and Vikram Iyer/University of Washington

There have been other origami-based systems that use motors, electrostatic actuators, shape-memory alloys, and electrothermal polymers, for example, but these did not address the challenges facing the researchers, Gollakota says. One was to find the sweet spot: an actuation mechanism strong enough not to change shape unless triggered, yet lightweight enough to keep power consumption low. Next, it had to produce a rapid transition response while falling to the ground. Finally, it needed a lightweight onboard energy storage solution to trigger the transition.

The mechanism, which Gollakota describes as “pretty commonsensical,” still took them a year to come up with. There’s a stem in the middle of the origami, comprising a solenoid coil (a coil that acts as a magnet when a current passes through it) and two small magnets. Four hinged carbon-fiber rods attach the stem to the edges of the structure. When a pulse of current is applied to the solenoid coil, it pushes the magnets toward each other, making the structure snap into its alternative shape.

All it requires is a tiny bit of power, just enough to put the magnets within the right distance from each other for the magnetic forces to work, Gollakota says. There is an array of thin, lightweight solar cells to harvest energy, which is stored in a little capacitor. The circuit is fabricated directly on the foldable origami structure, and also includes a microcontroller, timer, Bluetooth receiver, and pressure and temperature sensors.

“We can program these things to trigger the shape change based on any of these things—after a fixed time, when we send it a radio signal, or, at an altitude [or temperature] that this device detects,” Iyer adds. The origami structure is bistable, meaning it does not need any energy to maintain shape once it has transitioned.
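For a concrete, purely illustrative picture of how such a trigger policy might be organized, here is a minimal Python sketch. Every name and threshold in it (TriggerConfig, should_transition, fire_solenoid_pulse, the 30-second fail-safe) is a hypothetical stand-in, not something taken from the UW/Grenoble firmware; it simply encodes the three trigger types Iyer describes, plus the fact that the bistable structure only needs a single pulse.

```python
from dataclasses import dataclass

@dataclass
class TriggerConfig:
    # All names and values are illustrative assumptions, not from the paper.
    max_fall_time_s: float = 30.0     # fail-safe: transition after a fixed time
    target_altitude_m: float = 40.0   # transition when descending below this altitude
    max_temperature_c: float = 45.0   # or when a sensed condition is crossed

def should_transition(elapsed_s, altitude_m, temperature_c, radio_command, cfg):
    """Return True when the bistable origami should snap to its stable descent state."""
    return (
        radio_command                          # explicit command over the Bluetooth link
        or elapsed_s >= cfg.max_fall_time_s
        or altitude_m <= cfg.target_altitude_m
        or temperature_c >= cfg.max_temperature_c
    )

def fire_solenoid_pulse():
    # Placeholder: dump the capacitor through the solenoid coil so the magnets
    # snap together and latch the structure into its second bistable shape.
    print("pulse!")

# Hypothetical readings from a descent: (elapsed s, altitude m, temperature C, radio command)
cfg = TriggerConfig()
for elapsed_s, altitude_m, temperature_c, radio_command in [
    (5.0, 80.0, 21.0, False),
    (12.0, 38.0, 21.5, False),   # altitude threshold crossed here
]:
    if should_transition(elapsed_s, altitude_m, temperature_c, radio_command, cfg):
        fire_solenoid_pulse()
        break
```

Because the structure is bistable, the loop stops after a single pulse; holding the new shape costs no further energy.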

The researchers say their design can be extended to incorporate sensors for a variety of environmental monitoring applications. “Think of these little fliers as a sensor platform to measure environmental conditions, like temperature, light, and other things, [and] how they vary throughout the atmosphere,” Iyer says. Or they can deploy sensors on the ground for things like digital agriculture, climate change–related studies, and tracking forest fires.

In their current prototype, the microfliers only shape-change in one direction, but the researchers want to make them transition in both directions, to be able to toggle between the two states and control the trajectory even better. They also imagine a swarm of microfliers communicating with one another, controlling their behavior, and self-organizing how they fall and disperse.



There seem to be two general approaches to cooking automation. There’s the “let’s make a robot that can operate in a human kitchen because everyone has a human kitchen” approach, which seems like a good idea, except that you then have to build your robot to function in human environments, which is super hard. On the other end of the spectrum, there’s the “let’s make a dedicated automated system because automation is easier than robotics” approach, which seems like a good idea, except that you then have to be willing to accept compromises in recipes and texture and taste, because preparing food in an automated way just does not yield the same result, as anyone who has ever attempted to Cuisinart their way out of developing some knife skills can tell you.

The Robotics and Mechanisms Lab (RoMeLa) at UCLA, run by Dennis Hong, has been working on a compromise approach that leverages both robot-friendly automation and the kind of human skills that make things taste right. Called Project YORI, which somehow stands for “Yummy Operations Robot Initiative” while also meaning “cooking” in Korean, the system combines a robot-optimized environment with a pair of arms that can operate kitchen tools sort of like a human.

“Instead of trying to mimic how humans cook,” the researchers say, “we approached the problem by thinking how cooking would be accomplished if a robot cooks. Thus the YORI system does not use the typical cooking methods, tools or utensils which are developed for humans.” In addition to a variety of automated cooking systems, the tools that YORI does use are modified to work with a tool changing system, which mostly eliminates the problem of grasping something like a knife well enough that you can precisely and repeatedly exert a substantial amount of force through it, and also helps keep things structured and accessible.

In terms of cooking methods, the system takes advantage of technology when and where it works better than conventional human cooking techniques. For example, in order to tell whether ingredients are fresh or to determine when food is cooked ideally, YORI “utilizes unique chemical sensors,” which I guess are the robot equivalent of a nose and taste buds and arguably would do a more empirical job than some useless recipe metric like “season to taste.”

The advantage of a system like this is versatility. In theory, those added robotic capabilities mean it isn’t constrained to just the recipes you can cram into a system built around automation, while also being somewhat practical—or at least, more practical than a robot designed to interact with a lightly modified human kitchen. And it’s actually designed to be practical(ish), in the sense that it’s being developed under a partnership with Woowa Brothers, the company that runs the leading food delivery service in South Korea. It’s obviously still a work in progress—you can see a human hand sneaking in there from time to time. But the approach seems interesting, and I hope that RoMeLa keeps making progress on it, because I’m hungry.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 02 February 2024, ZURICH

Enjoy today’s videos!

Musical dancing is an ubiquitous phenomenon in human society. Providing robots the ability to dance has the potential to make the human/robot coexistence more acceptable. Hence, dancing robots have generated a considerable research interest in the recent years. In this paper, we present a novel formalization of robot dancing as planning and control of optimally timed actions based on beat timings and additional features extracted from the music.

Wow! Okay, all robotics videos definitely need confetti cannons.

[ DFKI ]

What an incredibly relaxing robot video this is.

Except for the tree bit, I mean.

[ Paper ] via [ ASL ]

Skydio has a fancy new drone, but not for you!

Skydio X10, a drone designed for first responders, infrastructure operators, and the U.S. and allied militaries around the world. It has the sensors to capture every detail of the data that matters and the AI-powered autonomy to put those sensors wherever they are needed. It packs more capability and versatility in a smaller and easier-to-use package than has ever existed.

[ Skydio X10 ]

An innovative adaptive bipedal robot with bio-inspired multimodal locomotion control can autonomously adapt its body posture to balance on pipes, surmount obstacles of up to 14 centimeters in height (48 percent of its height), and stably move between horizontal and vertical pipe segments. This cutting-edge robotics technology addresses challenges that out-pipe inspection robots have encountered and can enhance out-pipe inspections within the oil and gas industry.

[ Paper ] via [ VISTEC ]

Thanks, Poramate!

I’m not totally sure how you’d control all of these extra arms in a productive way, but I’m sure they’ll figure it out!

[ KIMLAB ]

The video is one of the tests we tried on the X30 robot dog in the R&D period, to examine the speed of its stair-climbing ability.

[ Deep Robotics ]

They’re calling this the “T-REX” but without a pair of tiny arms. Missed opportunity there.

[ AgileX ]

Drag your mouse to look around within this 360-degree panorama captured by NASA’s Curiosity Mars rover. See the steep slopes, layered buttes, and dark rocks surrounding Curiosity while it was parked below Gediz Vallis Ridge, which formed as a result of violent debris flows that were later eroded by wind into a towering formation. This happened about 3 billion years ago, during one of the last wet periods seen on this part of the Red Planet.

[ NASA ]

I don’t know why you need to drive out into the woods to drop-test your sensor rack. Though maybe the stunning Canadian backwoods scenery is reason enough.

[ NORLab ]

Here’s footage of Reachy in the kitchen, opening the fridge door and other doors, and cleaning up dirt and coffee stains.

If they ever make Reachy’s face symmetrical, I will refuse to include it in any more Video Fridays. O_o

[ Pollen Robotics ]

Inertial odometry is an attractive solution to the problem of state estimation for agile quadrotor flight. In this work, we propose a learning-based odometry algorithm that uses an inertial measurement unit (IMU) as the only sensor modality for autonomous drone racing tasks. We show that our inertial odometry algorithm is superior to the state-of-the-art filter-based and optimization-based visual-inertial odometry as well as the state-of-the-art learned-inertial odometry in estimating the pose of an autonomous racing drone.

[ UZH RPG ]

Robotic Choreographer is the world’s first dance performance-only robot arm born from the concept of performers that are bigger and faster than humans. This robot has a total length of 3 meters, two rotation axes that rotate infinitely, and an arm rotating up to five times for 1 second.

[ MPlusPlus ] via [ Kazumichi Moriyama ]

This video shows the latest development from Extend Robotics, demonstrating the completion of integration of the Mitsubishi Electric Melfa robot. Key demonstrations include 6 degrees-of-freedom (DoF) precision control with real-time inverse kinematics, dual Kinect camera, low-latency streaming and fusion, and high precision control drawing.

[ Extend Robotics ]

Here’s what’s been going on at the GRASP Lab at UPenn.

[ GRASP Lab ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.
Cybathlon Challenges: 02 February 2024, ZURICH, SWITZERLAND

Enjoy today’s videos!

Researchers at the University of Washington have developed small robotic devices that can change how they move through the air by “snapping” into a folded position during their descent. When these “microfliers” are dropped from a drone, they use a Miura-ori origami fold to switch from tumbling and dispersing outward through the air to dropping straight to the ground.

And you can make your own! The origami part, anyway:

[ Science Robotics ] via [ UW ]

Thanks, Sarah!

A central question in robotics is how to design a control system for an agile, mobile robot. This paper studies this question systematically, focusing on a challenging setting: autonomous drone racing. We show that a neural network controller trained with reinforcement learning (RL) outperforms optimal control (OC) methods in this setting. Our findings allow us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 g and a peak velocity of 108 km/h.

Also, please see our feature story on a related topic.

[ Science Robotics ]

Ascento has a fresh $4.3m in funding to develop its cute two-wheeled robot for less-cute security applications.

[ Ascento ]

Thanks, Miguel!

The evolution of Roomba is here. Introducing three new robots, with three new powerful ways to clean. For over 30 years, we have been on a mission to build robots that help people to do more. Now, we are answering the call from consumers to expand our robot lineup to include more 2 in 1 robot vacuum and mop options.

[ iRobot ]

As the beginning of 2023 Weekly KIMLAB, we want to introduce PAPRAS, Plug-And-Play Robotic Arm System. A series of PAPRAS applications will be posted in coming weeks. If you are interested in details of PAPRAS, please check our paper.

[ Paper ] via [ KIMLAB ]

Gerardo Bledt was the Head of our Locomotion and Controls Team at Apptronik. He tragically passed away this summer. He was a friend, colleague, and force of nature. He was a maestro with robots, and showed all of us what was possible. We dedicate Apollo and our work to Gerardo.

[ Apptronik ]

This robot plays my kind of Jenga.

This teleoperated robot was built by Lingkang Zhang, who tells us that it was inspired by Sanctuary AI’s robot.

[ HRC Model 4 ]

Thanks, Lingkang!

Soft universal grippers are advantageous to safely grasp a wide variety of objects. However, due to their soft material, these grippers have limited lifetimes, especially when operating in unstructured and unfamiliar environments. Our self-healing universal gripper (SHUG) can grasp various objects and recover from substantial realistic damages autonomously. It integrates damage detection, heat-assisted healing, and healing evaluation. Notably, unlike other universal grippers, the entire SHUG can be fully reprocessed and recycled.

[ Paper ] via [ BruBotics ]

Thanks, Bram!

What would the movie Barbie look like with robots?

[ Misty ]

Zoox is so classy that if you get in during the day and get out at night, it’ll apparently give you a free jean jacket.

[ Zoox ]

X30, the next generation of industrial inspection quadruped robot is on its way. It is now moving and climbing faster, and it has stronger adaptability to adverse environments with advanced add-ons.

[ DeepRobotics ]

Join us on an incredible journey with Alma, a cutting-edge robot with the potential to revolutionize the lives of people with disabilities. This short documentary takes you behind the scenes of our team’s preparation for the Cybathlon challenge, a unique competition that brings together robotics and human ingenuity to solve real-world challenges.

[ Cybathlon ]

NASA’s Moon rover prototype completed software tests. The VIPER mission is managed by NASA’s Ames Research Center in California’s Silicon Valley and is scheduled to be delivered to Mons Mouton near the South Pole of the Moon in late 2024 by Astrobotic’s Griffin lander as part of the Commercial Lunar Payload Services initiative. VIPER will inform future Artemis landing sites by helping to characterize the lunar environment and help determine locations where water and other resources could be harvested to sustain humans over extended stays. 

[ NASA ]

We are excited to announce Husky Observer, a fully integrated system that enables robotics developers to accelerate inspection solutions. Built on top of the versatile Husky platform, this new configuration will enable robotics developers to build their inspection solutions and fast track their system development.

[ Clearpath ]

Land mines and other unexploded ordnance from wars past and present maim or kill thousands of civilians in dozens of nations every year. Finding and disarming them is a slow, dangerous process. Researchers from the Columbia Climate School’s Lamont-Doherty Earth Observatory and other institutions are trying to harness drones, geophysics and artificial intelligence to make the process faster and safer.

[ Columbia ]

Drones are being used by responders in the terrible Morocco earthquake. This 5-minute video describes the five ways in which drones are typically used in earthquake response, and four ways that they aren’t.

[ CRASAR ]



It’s hard to beat the energy density of chemical fuels. Batteries are quiet and clean and easy to integrate with electrically powered robots, but they’re 20 to 50 times less energy dense than a chemical fuel like methanol or butane. This is fine for most robots that can afford to just carry around a whole bunch of batteries, but as you start looking at robots that are insect-size or smaller, batteries simply don’t scale down very well. And it’s not just the batteries—electric actuators don’t scale down well either, especially if you’re looking for something that can generate a lot of power.
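For a rough sense of that gap, here is a back-of-the-envelope comparison using ballpark specific-energy figures from the literature (these numbers are my own assumptions, not values from the Cornell paper), ignoring how efficiently either source is converted into mechanical work:

```python
# Approximate specific energies in MJ/kg; rough literature values, not from the paper.
li_ion   = 0.9    # ~250 Wh/kg lithium-ion cell
methanol = 19.9
butane   = 45.7

for name, fuel in [("methanol", methanol), ("butane", butane)]:
    print(f"{name}: ~{fuel / li_ion:.0f}x the specific energy of a Li-ion cell")
# methanol: ~22x the specific energy of a Li-ion cell
# butane: ~51x the specific energy of a Li-ion cell
```

Depending on the cell chemistry you assume, the ratio lands in roughly the 20-to-50-times range cited above.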

In a paper published 14 September in the journal Science, researchers from Cornell have tackled the small-scale actuation problem with what is essentially a very tiny, very soft internal-combustion engine. Methane vapor and oxygen are injected into a soft combustion chamber, where an itty-bitty li’l spark ignites the mixture. In half a millisecond, the top of the chamber balloons upward like a piston, generating forces of 9.5 newtons through a cycle that can repeat 100 times every second. Put two of these actuators together (driving two legs apiece) and you’ve got an exceptionally powerful soft quadruped robot.

Each of the two actuators powering this robot weighs just 325 milligrams and is about a quarter of the size of a U.S. penny. Part of the reason that they can be so small is that most of the associated components are off-board, including the fuel itself, the system that mixes and delivers the fuel, and the electrical source for the spark generator. But even without all of that stuff, the actuator has a bunch going on that enables it to operate continuously at high cycle frequencies without melting.

A view of the actuator and its component materials along with a diagram of the combustion actuation cycle. Science Robotics

The biggest issue may be that this actuator has to handle actual explosions, meaning that careful design is required to make sure that it doesn’t torch itself every time it goes off. The small combustion volume helps with this, as does the flame-resistant elastomer material and the integrated flame arrestor. Despite the violence inherent to how this actuator works, it’s actually very durable, and the researchers estimate that it can operate continuously for more than 750,000 cycles (8.5 hours at 50 hertz) without any drop in performance.

“What is interesting is just how powerful small-scale combustion is,” says Robert F. Shepherd, who runs the Organic Robotics Lab at Cornell. We covered some of Shepherd’s work on combustion-powered robots nearly a decade ago, with this weird pink jumping thing at IROS 2014. But going small has both challenges and benefits, Shepherd tells us. “We operate in the lower limit of what volumes of gases are combustible. It’s an interesting place for science, and the engineering outcomes are also useful.”

The first of those engineering outcomes is a little insect-scale quadrupedal robot that utilizes two of these soft combustion actuators to power a pair of legs each. The robot is 29 millimeters long and weighs just 1.6 grams, but it can jump a staggering 59 centimeters straight up and walk while carrying 22 times its own weight. For an insect-scale robot, Shepherd says, this is “near insect level performance, jumping extremely high, very quickly, and carrying large loads.”

Cornell University

It’s a little bit hard to see how the quadruped actually walks, since the actuators move so fast. Each actuator controls one side of the robot, with one combustion chamber connected to chambers at each foot with elastomer membranes. An advantage of this actuation system is that since the power source is gas pressure, you can implement that pressure somewhere besides the combustion chamber itself. Firing both actuators together moves the robot forward, while firing one side or the other can rotate the robot, providing some directional control.
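Here is a toy sketch of that differential-firing idea, with a made-up mapping and invented firing rates. This is not the Cornell team’s controller; the only number borrowed from the article is the 100-hertz ceiling on how fast the actuator can cycle.

```python
def firing_rates(forward, turn, max_hz=100.0):
    """
    Map a forward command and a turn command (both in [-1, 1]) to firing
    frequencies for the left- and right-side combustion actuators.
    Hypothetical mapping: equal rates drive straight, a bias rotates the robot.
    """
    left = max(0.0, min(1.0, forward - turn)) * max_hz
    right = max(0.0, min(1.0, forward + turn)) * max_hz
    return left, right

print(firing_rates(1.0, 0.0))   # (100.0, 100.0): both sides fire flat out, straight ahead
print(firing_rates(0.5, 0.3))   # (20.0, 80.0): right side fires faster, robot turns left
```

It’s the same intuition as skid-steer driving: equal firing on both sides goes straight, and biasing one side turns the robot.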

“It took a lot of care, iterations, and intelligence to come up with this steerable, insect-scale robot,” Shepherd told us. “Does it have to have legs? No. It could be a speedy slug, or a flapping bee. The amplitudes and frequencies possible with this system allow for all of these possibilities. In fact, the real issue we have is making things move slowly.”

Getting these actuators to slow down a bit is one of the things that the researchers are looking at next. By trading speed for force, the idea is to make robots that can walk as well as run and jump. And of course finding a way to untether these systems is a natural next step. Some of the other stuff that they’re thinking about is pretty wild, as Shepherd tells us: “One idea we want to explore in the future is using aggregates of these small and powerful actuators as large, variable recruitment musculature in large robots. Putting thousands of these actuators in bundles over a rigid endoskeleton could allow for dexterous and fast land-based hybrid robots.” Personally, I’m having trouble even picturing a robot like that, but that’s what’s exciting about it, right? A large robot with muscles powered by thousands of tiny explosions—wow.

Powerful, soft combustion actuators for insect-scale robots, by Cameron A. Aubin, Ronald H. Heisser, Ofek Peretz, Julia Timko, Jacqueline Lo, E. Farrell Helbling, Sadaf Sobhani, Amir D. Gat, and Robert F. Shepherd from Cornell, is published in Science.



This sponsored article is brought to you by NYU Tandon School of Engineering.

To address today’s health challenges, especially in our aging society, we must become more intelligent in our approaches. Clinicians now have access to a range of advanced technologies designed to assist early diagnosis, evaluate prognosis, and enhance patient health outcomes, including telemedicine, medical robots, powered prosthetics, exoskeletons, and AI-powered smart wearables. However, many of these technologies are still in their infancy.

The belief that advancing technology can improve human health is central to research related to medical device technologies. This forms the heart of research for Prof. S. Farokh Atashzar who directs the Medical Robotics and Interactive Intelligent Technologies (MERIIT) Lab at the NYU Tandon School of Engineering.

Atashzar is an Assistant Professor of Electrical and Computer Engineering and Mechanical and Aerospace Engineering at NYU Tandon. He is also a member of NYU WIRELESS, a consortium of researchers dedicated to the next generation of wireless technology, as well as the Center for Urban Science and Progress (CUSP), a center of researchers dedicated to all things related to the future of modern urban life.

Atashzar’s work is dedicated to developing intelligent, interactive robotic, and AI-driven assistive machines that can augment human sensorimotor capabilities and allow our healthcare system to go beyond natural competences and overcome physiological and pathological barriers.

Stroke detection and rehabilitation

Stroke is the leading cause of age-related motor disabilities and is becoming more prevalent in younger populations as well. But while there is a burgeoning marketplace for rehabilitation devices that claim to accelerate recovery, including robotic rehabilitation systems, recommendations for how and when to use them are based mostly on subjective evaluation of the sensorimotor capacities of patients in need.

Atashzar is working in collaboration with John-Ross Rizzo, associate professor of Biomedical Engineering at NYU Tandon and Ilse Melamid Associate Professor of rehabilitation medicine at the NYU School of Medicine and Dr. Ramin Bighamian from the U.S. Food and Drug Administration to design a regulatory science tool (RST) based on data from biomarkers in order to improve the review processes for such devices and how best to use them. The team is designing and validating a robust recovery biomarker enabling a first-ever stroke rehabilitation RST based on exchanges between regions of the central and peripheral nervous systems.

S. Farokh Atashzar is an Assistant Professor of Electrical and Computer Engineering and Mechanical and Aerospace Engineering at New York University Tandon School of Engineering. He is also a member of NYU WIRELESS, a consortium of researchers dedicated to the next generation of wireless technology, as well as the Center for Urban Science and Progress (CUSP), a center of researchers dedicated to all things related to the future of modern urban life, and directs the MERIIT Lab at NYU Tandon. NYU Tandon

In addition, Atashzar is collaborating with Smita Rao, PT, the inaugural Robert S. Salant Endowed Associate Professor of Physical Therapy. Together, they aim to identify AI-driven computational biomarkers for motor control and musculoskeletal damage and to decode the hidden complex synergistic patterns of degraded muscle activation using data collected from surface electromyography (sEMG) and high-density sEMG. In the past few years, this collaborative effort has been exploring the fascinating world of “Nonlinear Functional Muscle Networks” — a new computational window (rooted in Shannon’s information theory) into human motor control and mobility. This synergistic network orchestrates the “music of mobility,” harmonizing the synchrony between muscles to facilitate fluid movement.
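To make the “functional muscle network” idea concrete, here is a generic sketch of one common way such networks are built: estimate the mutual information between every pair of muscle activity envelopes and treat the result as a weighted adjacency matrix. This histogram-based illustration, run on synthetic data, is not the MERIIT lab’s actual pipeline (which works with high-density sEMG and more sophisticated nonlinear estimators); it only shows the Shannon-information flavor of the approach.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate, in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Synthetic stand-in for rectified, low-pass-filtered sEMG envelopes (channels x samples).
rng = np.random.default_rng(0)
drive = rng.standard_normal(5000)   # a shared "neural drive" signal
emg = np.vstack([0.7 * drive + 0.3 * rng.standard_normal(5000) for _ in range(4)])

n_channels = emg.shape[0]
adjacency = np.zeros((n_channels, n_channels))
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        adjacency[i, j] = adjacency[j, i] = mutual_information(emg[i], emg[j])

print(np.round(adjacency, 2))  # stronger weights = more shared information between muscles
```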

But rehabilitation is only one of the research thrusts at MERIIT lab. If you can prevent strokes from happening or reoccurring, you can head off the problem before it happens. For Atashzar, a big clue could be where you least expect it: in your retina.

Atashzar, along with NYU Abu Dhabi Assistant Professor Farah Shamout, is working on a project they call “EyeScore,” an AI-powered technology that uses non-invasive scans of the retina to predict the recurrence of stroke in patients. They use optical coherence tomography — a scan of the retina at the back of the eye — and track changes over time using advanced deep learning models. The retina, attached directly to the brain through the optic nerve, can be used as a physiological window into changes in the brain itself.

Atashzar and Shamout are currently formulating their hybrid AI model, pinpointing the exact changes that can predict a stroke and recurrence of strokes. The outcome will be able to analyze these images and flag potentially troublesome developments. And since the scans are already in use in optometrist offices, this life-saving technology could be in the hands of medical professionals sooner than expected.

Preventing downturns

Atashzar is utilizing AI algorithms for uses beyond stroke. Like many researchers, his gaze was drawn to the largest medical event in recent history: COVID-19. In the throes of the COVID-19 pandemic, the very bedrock of global healthcare delivery was shaken. COVID-19 patients, susceptible to swift and severe deterioration, presented a serious problem for caregivers.

Especially in the pandemic’s early days, when our grasp of the virus was tenuous at best, predicting patient outcomes posed a formidable challenge. The merest tweaks in admission protocols held the power to dramatically shift patient fates, underscoring the need for vigilant monitoring. As healthcare systems groaned under the pandemic’s weight and contagion fears loomed, outpatients and nursing center residents were steered toward remote symptom tracking via telemedicine. This cautious approach sought to spare them unnecessary hospital exposure, allowing in-person visits only for those in the throes of grave symptoms.

But while much of the pandemic’s research spotlight fell on diagnosing COVID-19, this study took a different avenue: predicting patient deterioration in the future. Existing studies often juggled an array of data inputs, from complex imaging to lab results, but failed to harness data’s temporal aspects. Enter this research, which prioritized simplicity and scalability, leaning on data easily gathered not only within medical walls but also in the comfort of patients’ homes with the use of simple wearables.

S. Farokh Atashzar and colleagues at NYU Tandon are using deep neural network models to assess COVID data and try to predict patient deterioration in the future.

Atashzar, along with his Co-PI of the project Yao Wang, Professor of Biomedical Engineering and Electrical and Computer Engineering at NYU Tandon, used a novel deep neural network model to assess COVID data, leveraging time series data on just three vital signs to foresee COVID-19 patient deterioration for some 37,000 patients. The ultimate prize? A streamlined predictive model capable of aiding clinical decision-making for a wide spectrum of patients. Oxygen levels, heartbeats, and temperatures formed the trio of vital signs under scrutiny, a choice propelled by the ubiquity of wearable tech like smartwatches. A calculated exclusion of certain signs, like blood pressure, followed, due to their incompatibility with these wearables.

The researchers utilized real-world data from NYU Langone Health’s archives spanning January 2020 to September 2022, which lent the study authenticity. Predicting deterioration within timeframes of 3 to 24 hours, the model analyzed vital-sign data from the preceding 24 hours. This crystal ball aimed to forecast outcomes ranging from in-hospital mortality to intensive care unit admissions or intubations.
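As a sketch of what a model with those inputs and outputs could look like, here is a generic recurrent classifier in PyTorch. It is not the NYU team’s architecture; only the tensor shapes mirror the setup described above (24 hourly readings of three vital signs in, one deterioration risk out).

```python
import torch
import torch.nn as nn

class DeteriorationPredictor(nn.Module):
    """Generic GRU classifier: 24 h of (SpO2, heart rate, temperature) -> risk of deterioration."""
    def __init__(self, n_vitals=3, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_vitals, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):          # x: (batch, 24 time steps, 3 vitals)
        _, h = self.rnn(x)         # h: (1, batch, hidden), the final hidden state
        return self.head(h[-1])    # logits; apply sigmoid for a probability

model = DeteriorationPredictor()
vitals = torch.randn(8, 24, 3)            # a synthetic batch of 8 patients
risk = torch.sigmoid(model(vitals))       # predicted probability of deterioration
loss_fn = nn.BCEWithLogitsLoss()          # would be trained against observed outcomes
print(risk.shape)                         # torch.Size([8, 1])
```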

“In a situation where a hospital is overloaded, getting a CT scan for every single patient would be very difficult or impossible, especially in remote areas when the healthcare system is overstretched,” says Atashzar. “So we are minimizing the need for data, while at the same time, maximizing the accuracy for prediction. And that can help with creating better healthcare access in remote areas and in areas with limited healthcare.”

In addition to addressing the pandemic at the micro level (individuals), Atashzar and his team are also working on algorithmic solutions that can assist the healthcare system at the meso and macro level. In another effort related to COVID-19, Atashzar and his team are developing novel probabilistic models that can better predict the spread of disease when taking into account the effects of vaccination and mutation of the virus. Their efforts go beyond the classic small-scale models that were previously used for small epidemics. They are working on these large-scale complex models in order to help governments better prepare for pandemics and mitigate rapid disease spread. Atashzar is drawing inspiration from his active work with control algorithms used in complex networks of robotic systems. His team is now utilizing similar techniques to develop new algorithmic tools for controlling spread in the networked dynamic models of human society.
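For readers unfamiliar with the “classic small-scale models” being extended here, the sketch below is the textbook compartmental baseline: an SIR model with a constant vaccination rate, stepped forward with Euler integration. It is deliberately simple, with arbitrary parameter values, and is not the networked probabilistic model Atashzar’s team is developing.

```python
def sirv_step(S, I, R, V, beta=0.3, gamma=0.1, nu=0.01, dt=1.0):
    """One Euler step of a basic SIR model with a constant vaccination rate nu.
    Parameter values are arbitrary illustrations, not fitted to any real outbreak."""
    N = S + I + R + V
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    new_vaccinations = nu * S * dt
    return (S - new_infections - new_vaccinations,
            I + new_infections - new_recoveries,
            R + new_recoveries,
            V + new_vaccinations)

# Start with 1 percent of the population infected and nobody vaccinated.
S, I, R, V = 0.99, 0.01, 0.0, 0.0
infected_over_time = []
for day in range(180):
    S, I, R, V = sirv_step(S, I, R, V)
    infected_over_time.append(I)

print(f"peak infected fraction: {max(infected_over_time):.3f}")
```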

A state-of-the-art human-machine interface module with wearable controller is one of many multi-modal technologies tested in S. Farokh Atashzar’s MERIIT Lab at NYU Tandon. NYU Tandon

Where minds meet machines

These projects represent only a fraction of Atashzar’s work. In the MERIIT lab, he and his students build cyber-physical systems that augment the functionality of the next-generation medical robotic systems. They delve into haptics and robotics for a wide range of medical applications. Examples include telesurgery and telerobotic rehabilitation, which are built upon the capabilities of next-generation telecommunications. The team is specifically interested in the application of 5G-based tactile internet in medical robotics.

Recently, he received a donation from the Intuitive Foundation: a Da Vinci research kit. This state-of-the-art surgical system will allow his team to explore ways for a surgeon in one location to operate on a patient in another—whether they are in a different city, region, or even continent. While several researchers have investigated this vision in the past decade, Atashzar is specifically concentrating on connecting the power of the surgeon’s mind with the autonomy of surgical robots, promoting discussions on ways to share surgical autonomy between the intelligence of machines and the minds of surgeons. This approach aims to reduce mental fatigue and cognitive load on surgeons while reintroducing the sense of haptics lost in traditional surgical robotic systems.

Atashzar poses with NYU Tandon’s Da Vinci research kit. This state-of-the-art surgical system will allow his team to explore ways for a surgeon in one location to operate on a patient in another—whether they are in a different city, region, or even continent. NYU Tandon

In a related line of research, the MERIIT lab is also focusing on cutting-edge human-machine interface technologies that enable neuro-to-device capabilities. These technologies have direct applications in exoskeletal devices, next-generation prosthetics, rehabilitation robots, and possibly the upcoming wave of augmented reality systems in our smart and connected society. One significant challenge for such systems, and a focus of the team, is predicting the intended actions of human users by processing signals generated by the functional behavior of motor neurons.

By solving this challenge using advanced AI modules in real time, the team can decode a user’s motor intentions and predict the intended gestures for controlling robots and virtual reality systems in an agile and robust manner. Some practical challenges include ensuring the generalizability, scalability, and robustness of these AI-driven solutions, given the variability of human neurophysiology and the heavy reliance of classic models on data. Powered by such predictive models, the team is advancing the complex control of human-centric machines and robots. They are also crafting algorithms that take into account human physiology and biomechanics. This requires transdisciplinary solutions bridging AI and nonlinear control theory.

Atashzar’s work dovetails perfectly with the work of other researchers at NYU Tandon, which prizes interdisciplinary work without the silos of traditional departments.

“Dr. Atashzar shines brightly in the realm of haptics for telerobotic medical procedures, positioning him as a rising star in his research community,” says Katsuo Kurabayashi, the new chair of the Mechanical and Aerospace Engineering department at NYU Tandon. “His pioneering research carries the exciting potential to revolutionize rehabilitation therapy, facilitate the diagnosis of neuromuscular diseases, and elevate the field of surgery. This holds the key to ushering in a new era of sophisticated remote human-machine interactions and leveraging machine learning-driven sensor signal interpretations.”

This commitment to human health, through the embrace of new advances in biosignals, robotics, and rehabilitation, is at the heart of Atashzar’s enduring work, and his unconventional approaches to age-old problems make him a perfect example of the approach to engineering embraced at NYU Tandon.



Today, iRobot is announcing the newest, fanciest, and most expensive Roomba yet. The Roomba Combo j9+ trades a dock for what can only be described as a small indoor robot garage, which includes a robot-emptying vacuum system that can hold two months of dry debris along with a water reservoir that can provide up to 30 days of clean water to refill the robot’s mopping tank. Like all of iRobot’s new flagship products, the Combo j9+ is very expensive at just under US $1,400. But if nothing else, it shows us where iRobot is headed—toward a single home robot that can do everything without you having to even think about it. Almost.

The j9+ (I’m going to stop saying “Combo” every time, but that’s the one I’m talking about) is essentially an upgraded version of the j7+, which was introduced a year ago. It’s a Roomba vacuum that includes an integrated tank for clean water, and on hard floors, the robot can rotate a fabric mopping pad from on top of its head to under its butt to mop up water that it squirts onto the floor underneath itself. On carpet, the mopping pad gets rotated back up, ensuring that your carpet doesn’t get all moppy.

The biggest difference with the j9+ is that rather than having to manually fill the robot’s clean-water tank before every mopping session, you can rely on a dock that includes a huge 3-liter clean water tank that can keep the robot topped off for a month. This also means that the robot can mop more effectively, since it can use more water when it needs to and then return to the dock midcycle to replenish if necessary.

This all does turn the dock into a bit of a monster. It’s a dock in the space-dock sense, not the boat-dock sense—it’s basically a garage for your Roomba, nothing like the low-profile charging docks that Roombas started out with. iRobot is obviously aware of this, so they’ve put some effort into making the dock look nice, and all of the guts can now be accessed from the front, making the top a usable surface.

The Combo j9+ comes with a beefy docking system that stores a month’s worth of clean water. iRobot

iRobot is not the only company offering hybrid vacuuming and mopping robots with beefy docks. But these have come with some pretty significant compromises, like with robots that just lift the mopping pad up when they encounter carpet rather than moving the pad out of the way entirely. This invariably results in a mopping pad dripping dirty water onto your carpet, which is not great. In iRobot’s internal testing, “we’ve seen competitive robots get materially worse,” says iRobot CEO Colin Angle. iRobot is hardly an unbiased party here, but there’s a reason that Roombas tend to be more expensive than their competitors, and iRobot argues that its focus on long-term reliability in the semi-structured environment of the home is what makes its robots worth the money.

Mapping and localization is a good example of this, Angle explains, which is why iRobot relies on vision rather than lasers. “Lasers are the fastest way to create a map, but a geometry-based solution is very brittle to a changing environment. They can’t handle a general rearranging of furniture in a room.” iRobot’s latest Roombas use cameras that look up toward the ceiling of rooms, tracking visual landmarks that don’t change very often: When was the last time you rearranged your ceiling? This allows iRobot to offer map stability that’s generational across robots.

iRobot did experiment with a depth sensor on the 2019 Roomba S9, but that technology hasn’t made it into a Roomba since. “I am currently happy with one single camera,” Angle tells us. “I don’t feel like anything we’re doing on the robot is constrained by not having a 3D sensor. And I don’t yet have arms on the robot; certainly if we’re doing manipulation, depth would become very important, but for the moment in time, I think there’s a lot more you can get out of a monocular camera. 3D is on our road map for when we’re going to do a step-change in functionality that doesn’t exist in the market today.”

So what’s left to automate here? What is the next generation Roomba going to offer that these latest ones don’t, besides maybe arms? The obvious thing is something that other robotic vacuum companies already offer: cleaning the grungy mopping pad by using a pad-washing system within the dock. “It’s a great idea,” says Angle, but iRobot has not been able to come up with a system that can do this to his satisfaction. You have to wash a robot’s mopping pad with something, like some kind of cleaning fluid, and then that used cleaning fluid has to go somewhere. So now you’re talking about yet another fluid reservoir (or two) that the user has to manage plus an even larger dock to hold all of it. “We don’t have a solution,” Angle says, although my assumption is that they’re working hard on something.

The water-related endpoint for floor care robots seems to be plumbing integration: a system that can provide clean water and accept dirty water on demand. A company called SwitchBot is already attempting to do this with a water docking station that links into undersink plumbing. It’s more functional than elegant, because I can promise you that zero people have a house designed around robot-accessible plumbing, but my guess is that new houses are going to start to get increasingly robot-optimized.

In the meantime, dealing with a dirty mopping pad is done the same way as with the previous model, the j7+: You remove the pad, which I promise is super easy to do, drop it in the laundry, and replace it with a clean one. It means that you have to physically interact with the robot at least once for every mopping cycle, which is something that iRobot is trying really hard to get away from, but it’s really not that big of an ask, all things considered.

One issue I foresee as Roombas get more and more hands-off is that it’ll get harder and harder to convince people to do maintenance on them. Roombas are arguably some of the most rugged robots ever made, considering that they live and work in semi-structured environments supervised by untrained users. But their jobs are based around spinning mechanical components in contact with the ground, and if you have pets or live with someone with long hair, you know what the underbelly of a Roomba can turn into. A happy Roomba is a Roomba that gets its bearings cleaned out from time to time, and back when we all had to empty our Roomba’s dustbin after every cleaning cycle, it was easy to just flip the robot over and do a quick hair extraction. But with Roombas now running unsupervised for weeks or months at a time, I worry for the health of their innards.

While it’s tempting to just focus on the new hardware here, what makes robots actually useful is increasingly dependent on software. The j9+ does things that are obvious in retrospect, like prioritizing what rooms to clean based on historical dirt measurements, doing a deeper cleaning of bathroom floors relative to hardwood floors, and using a new back-and-forth “scrubbing” trajectory. That last thing is actually not new at all; Evolution Robotics’ Mint was mopping that way back in 2010. But since iRobot acquired that company in 2012, we’ll give it a pass. And of course, the j9+ can still recognize and react to 80-something household objects, from wayward socks to electrical cords.

Mopping pad up! iRobot

I asked Angle what frustrates him the most about how people perceive robot vacuums:

“I wish customers didn’t have a honeymoon period where the robot’s ability to live up to expectations wasn’t ignored,” he told me. Angle explains that when consumers first get a robot, for at least the first few weeks, they cut it plenty of slack, frequently taking the blame for the robot getting lost or stuck. “During the early days of living with a robot, people think the robot is so much smarter than it actually is. In fact, if I’m talking to somebody about what they think their robot knows about every room in their house, it’s like, I’m sorry, but even with unlimited resources and time I have no idea how I’d get a Roomba to learn those things.” The problem with this, Angle continues, is that the industry is increasingly focused on optimizing for this honeymoon period, meaning that gimmicky features are given more weight than long-term reliability. For iRobot, which is playing the long game (just ask my Roomba 560 from 2010!), this may put them at a disadvantage in the trigger-happy consumer market.

The Roomba Combo j9+ is available to ship 1 October for $1,399.99. There’s also a noncombo Roomba j9+, which includes a mopping function in the form of a swappable bin with a mopping pad attached and comes with a much smaller dock, for $899.99. If those prices seem excessive, that’s totally reasonable, because again, iRobot’s latest and greatest robots are always at a premium—but all of these new features will eventually trickle down into Roombas that are affordable for the rest of us.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.
Cybathlon Challenges: 02 February 2024, ZURICH

Enjoy today’s videos!

We leverage tensegrity structures as wheels for a mobile robot that can actively change its shape by expanding or collapsing the wheels. Besides the shape-changing capability, using tensegrity as wheels offers several advantages over traditional wheels of similar size, such as shock-absorbing capability without added mass since tensegrity wheels are both lightweight and highly compliant. The robot can also jump onto obstacles up to 300 millimeters high with a bistable mechanism that can gradually store but quickly release energy.

[ Adaptive Robotics Lab ]

Meet GE Aerospace’s Sensiworm (Soft ElectroNics Skin-Innervated Robotic Worm), a highly intelligent, acutely sensitive soft robot that could serve as extra sets of eyes and ears for Aerospace service operators inside the engine. Deploying self-propelling, compliant robots like Sensiworm would give operators virtually unfettered access in the future to perform inspections without having to disassemble the engine.

[ GE ]

Why not Zoidberg?

[ Boston Dynamics ]

Traditional AI methods need several weeks, days, or hours to let a walking robot learn to walk. This becomes impractical. This study overcomes the problem by introducing a novel bio-inspired integrative approach to develop neural locomotion control that enables a stick insect-like walking robot to learn how to walk within 20 seconds! The study not only proposes a solution for neural locomotion control but also enables insights into the neural equipment of the biological template. It also provides guidance for further developing advanced bio-inspired theory and simulations.

[ VISTEC ]

Thanks, Poramate!

At Hello Robotics, we are redefining the way humans and robots interact. Our latest creation, MAKI Pro, embodies our belief in empathic design—a principle that prioritizes the emotional and social dimensions of technology. MAKI Pro offers unique features such as animatronic eyes for enhanced eye contact, an embedded PC, and 17 points of articulation. Its speech capabilities are also powered by ChatGPT, adding an element of interaction that’s more natural. The compact design allows for easy placement on a desktop.

[ Hello Robotics ]

Thanks Tim!

During the RoboNav project, autonomous driving tests were conducted in the Seetaler Alps in Austria. The tracked Mattro Rovo3 robot autonomously navigates to the selected goal within the operational area, considering alternative paths and making real-time decisions to avoid obstacles.

[ RoboNav ] via [ ARTI ]

Thanks Lena!

NASA’s Moon rover prototype completed lunar lander egress tests.

[ NASA ]

In the early days of Hello Robot, Aaron Edsinger and Charlie Kemp created several prototype robots and tested them. This video from November 24, 2017 was taken as Charlie remotely operated a prototype robot in his unoccupied Atlanta home from rural Tennessee to take care of his family’s cat. Charlie remotely operated the robot on November 23, 24, 25, and 26. He successfully set out fresh food and water for the cat, put dirty dishes in the sink, threw away empty cat food cans, and checked the kitty litter.

[ Hello Robot ]

For a robot that looks nothing at all like a bug, this robot really does remind me of a bug.

[ Zarrouk Lab ]

Teaching quadrupedal robots to shove stuff, which actually seems like it might be more useful than it sounds.

[ RaiLab Kaist ]

The KUKA Innovation Award has been held annually since 2014 and is open to developers, graduates, and research teams from universities and companies. For this year’s award, the applicants were asked to use open interfaces in our newly introduced robot operating system iiQKA and to add their own hardware and software components. Team JARVIS, from the Merlin Laboratory of the Politecnico di Milano in Italy, ultimately won, convincing the jury with a plug-and-play method for programming collaborative robotics applications that is fully integrated into the iiQKA ecosystem.

[ Kuka ]

Once a year, the FZI Research Center for Information Technology (FZI Forschungszentrum Informatik) offers a practical course for students at the Karlsruhe Institute of Technology (KIT) to learn about Biologically Motivated Robots. During the practical course, student teams develop solutions for a hide-and-seek challenge in which mobile robots (Boston Dynamics Spot, ANYbotics ANYmal, Clearpath Robotics Husky) must autonomously hide and find each other.

[ FZI ]

A couple of IROS 35th Anniversary plenary talks from Kyoto last year, featuring Marc Raibert and Roland Siegwart.

[ IROS ]

Are robots on the verge of becoming human-like and taking over most jobs? When will self-driving cars be cost-effective? What challenges in robotics will be solved by Large Language Models and generative AI?
Although renowned roboticist Ruzena Bajcsy recently retired from Berkeley, she will return to discuss her insights on how robotics research has evolved over the past half-century with five senior colleagues who have combined research experience of over 200 years.

[ Berkeley ]



How do you land on an asteroid? A lot of very talented engineers have thought about it. Putting a robotic spacecraft down safely on a moon or planet is hard enough, with the pull of gravity to keep you humble. But when it comes to an asteroid, where gravity may be a few millionths of what it is on Earth, is “landing” even the right word?

NASA’s OSIRIS-REx mission is due back on Earth on 24 September after a seven-year voyage to sample the regolith of the asteroid 101955 Bennu—and in that case, mission managers decided not even to risk touching down on Bennu’s rocky crust. “We don’t want to deal with the uncertainty of the actual contact with the surface any longer than necessary,” said Mike Moreau, the deputy mission manager, back in 2020. They devised a scheme to poke the asteroid with a long sampling arm; the ship spent more than two years orbiting Bennu and all of 16 seconds touching it.

Maybe landing is a job for a softbot—a shape-shifting articulated spacecraft of the sort that Jay McMahon and colleagues at the University of Colorado in Boulder have been working on for more than six years. You can call them AoES—short for Area-of-Effect Softbots. The renderings of one resemble a water lily.

That’s not entirely by accident. A bit like a floating lily, the softbot has a lot of surface area relative to its mass. So, if there isn’t much gravity to work with, it can maneuver using much smaller forces—such as electro-adhesion, solar radiation, and van der Waals attraction between molecules. (If you’re not familiar with van der Waals forces, think of a gecko sticking to a wall.)

“There are electrostatic forces that will act and are not insignificant in the asteroid environment,” says McMahon. “It’s just a weird place, where gravity is so weak that those forces that exist on Earth, which we basically ignore because they’re so insignificant—you can take advantage of them in interesting ways.”

It’s important to say, before we go further, that space softbots are a long-term idea, on the back burner for now. McMahon’s team got some funding in 2017 from NIAC, the NASA Innovative Advanced Concepts program; more recently they’ve been researching whether they can apply some of their technology to on-orbit servicing of satellites or removal of space junk. McMahon has also been a scientist on other missions, including OSIRIS-REx and DART, which famously crashed into a small asteroid last year to change the asteroid’s orbital path.

A problem with small asteroids is that many of them—perhaps most—aren’t solid boulders. If they are less than 10 kilometers in diameter, the chances are high that they are so-called rubble piles—agglomerations of rock, metal, and perhaps ice that are held together, in part, by the same weak forces AoES probes would use to explore them. Rubble piles are risky for spacecraft: When OSIRIS-REx gently bumped the surface of Bennu with its sampling arm, scientists were surprised to see that it broke right through with minimal resistance, sending a shower of rock and dirt in all directions.

An even softer approach may be in order, if you want to set a robot down on a rubble pile asteroid. If you’re going to explore the asteroid or perhaps mine it—you need a way to approach it, then settle on the surface without making a mess of it. Early missions tried harpoons and thrusters, and had a rough time.

In this rendering, the softbot spreads its limbs to stick to the asteroid while it digs up debris. The University of Colorado Boulder

“You need to find a way to hold yourself down, but you also need to find a way to not sink in if it’s too soft,” says McMahon. “And so that’s where this big-area idea came from.”

McMahon and his team wrote in a 2018 report for NASA that they can envision a softbot, or a fleet of them, flown into orbit around an asteroid by a mother ship. The petals might be made partly of silicone elastomers, flexible material that has been used on previous spacecraft. In early renditions the petals were a large disc; the flower design turned out to be more efficient. When they’re spread out straight (perhaps extending a few meters), they could act as a solar sail of sorts, slowly guiding the softbot to the surface and curling up to cushion the landing if necessary. Then they could change shape to conform to the asteroid’s own, perhaps attracting themselves to it naturally with van der Waals forces, supplemented with a small electrical charge.

The charge need not be very strong; more important is that the petals be large enough that, when spread out over the surface, they cumulatively create a good grip. McMahon and his colleagues suggest the charge could be turned on and off with HASEL (short for Hydraulically Amplified Self-Healing Electrostatic) actuators, perhaps only affecting one part of a petal at a time.
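To get a sense of scale, a quick back-of-the-envelope check shows why such gentle forces are enough. Only the “few millionths of Earth gravity” figure comes from above; the softbot’s mass, contact area, and adhesion pressure below are illustrative assumptions, not mission numbers.

```python
# Rough check: how little adhesion does a softbot need to stay put on an asteroid?
# Assumed values (illustrative only, not mission specifications).
G_EARTH = 9.81                  # m/s^2
g_asteroid = 3e-6 * G_EARTH     # "a few millionths" of Earth gravity
mass = 10.0                     # kg, assumed softbot mass
petal_area = 2.0                # m^2 of petal in contact with the surface, assumed

weight = mass * g_asteroid      # ~0.3 millinewtons
adhesion_pressure = 1.0         # Pa, a very modest electro-adhesive/van der Waals pressure
holding_force = adhesion_pressure * petal_area

print(f"weight on asteroid: {weight * 1e3:.2f} mN")
print(f"holding force at 1 Pa over {petal_area} m^2: {holding_force:.1f} N")
print(f"margin: roughly {holding_force / weight:.0f}x the softbot's weight")
```

Even with these deliberately conservative numbers, the holding force exceeds the softbot’s weight by a factor of several thousand, which is why a weak, switchable charge spread over large petals is plausible at all.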

What about actually digging into the asteroid or kicking up rock for the mother ship to recover? The limbs should hold the spacecraft down while a sampling tool does its work. What if your spacecraft lands in a bad place, or you want to move on to another part of the asteroid? Bend the petals and the softbot can crawl along, a little like a caterpillar. If necessary, the spacecraft can slowly “hop” from one spot to another, straightening its petals again as solar sails to steer. Importantly, they operate without using much fuel, which is heavy, limited in quantity, and probably not something you want contaminating the asteroid.

Though the Colorado team was very thorough in designing their softbot concept, there are obviously countless details still to be worked out—issues of guidance, navigation, power, mass and many others, to say nothing of the economics and political maneuvering needed to launch a new technology. AoES vehicles as currently designed may never fly, but ideas from them may find their way into spacecraft of the future. In McMahon’s words, “This concept elegantly overcomes many of the difficulties.”



Yesterday, Clearpath Robotics of Kitchener, Ontario in Canada (and Clearpath’s mobile logistics robot division OTTO Motors) announced that it was being acquired by the Milwaukee-based Rockwell Automation for an undisclosed amount.

The press release (which comes from Rockwell, not Clearpath) is focused exclusively on robotics for industrial applications: that is, on OTTO Motors’ Autonomous Mobile Robots (AMRs) in the context of production logistics. If you take a look at what Rockwell does, this makes sense: as an automation company, it isn’t typically doing what most of us would think of as “robotics,” in the sense that the mechanical systems it automates don’t do the kind of dynamic decision making that (in my opinion) distinguishes robots from machines. So the OTTO Motors AMRs (and the people at OTTO who get them to autonomously behave themselves) are an important and forward-looking addition to what Rockwell is able to offer in an industrial context.

That’s all fine and dandy as far as OTTO Motors goes. What worries me, though, is that there’s zero mention of Clearpath’s well-known and much loved family of yellow and black research robots. This includes the Husky UGV, arguably the standard platform for mobile robotics research and development, as well as the slightly less yellow but just as impactful Turtlebot 4, announced barely a year ago in partnership with iRobot and Open Robotics.

With iRobot, Open Robotics, and now Clearpath all getting partially or wholly subsumed (or consumed?) by other companies that have their own priorities, it’s hard not to be concerned about what’s going to happen to these hardware and software platforms (including Turtlebot and ROS) that have provided the foundation for so much robotics research and education. Clearpath in particular has been a pillar of the ROS community since there’s been a ROS community, and it’s unclear how things are going to change going forward.

We’ve reached out to Clearpath to hopefully get a little bit of clarity on all this stuff, and we’ll have an update as soon as we can.



The counterintuitive, sinewy motions of snakes, stingrays, and skydivers are notoriously hard to simulate, animate, or anticipate. All three types of locomotion—through sand, sea, and air—rely on neither wings nor limbs but rather on subtle and sometimes sudden changes in a body’s geometry.

Now researchers from Caltech and the Technical University of Berlin have created an algorithm that can finally put such curiously complex motions into computable form. In the short term, the team says they hope to help animators bring such strange creatures to virtual life—while in the longer term enabling new modes of locomotion for roboticists and other technologists designing new ways to make things move.

Motion From Shape Change (SIGGRAPH 2023)

“We spoke to people from Disney—they told us that animating snakes is pretty nasty and a lot of work for them,” says Oliver Gross, a doctoral student in mathematics at the Technical University of Berlin, and the paper’s lead author. “We hope to simplify this a little bit.”

Even if the animator doesn’t know how one shape turns into the next, the algorithm will determine physical movements through space that match the shape change.

The algorithm examines each body as a shape fashioned from vertices—the points that plot out a 3D model’s mesh or skeleton rig, for instance. The algorithm’s goal, then, is to determine the most energy-efficient way that set of vertices can rotate or translate.

What “energy-efficient” actually means depends on the material through which a body is moving. If a body is pushing through a viscous fluid—such as a bacterium or a jellyfish swimming through water—the algorithm finds motions that dissipate the least energy into the fluid via friction, following a fluid mechanics theorem known as Helmholtz’s principle.

On the other hand, if a body is moving through a vacuum or a thin medium like air—an astronaut in freefall, for instance, or a falling cat—that body won’t face nearly as much drag, and its movement is at the mercy of its inertia instead. So, the algorithm instead minimizes a body’s kinetic energy, in accordance with Euler’s principle of least action.

Regardless of the specific physics involved, a user feeds the algorithm a sequence of images. Imagine a sequence of four squiggly shapes created by an animator, each different from the last. Even if the animator doesn’t know how one shape turns into the next, the algorithm will determine physical movements through space that match the shape change. In the process, the algorithm can also account for gravity’s pull and, in viscous media, the effect the fluid has on the shape.
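To make that concrete, here is a toy version of the question the algorithm answers at each step: given the shape before and after a change, which rigid rotation and translation account for the change with the least motion? The snippet uses the classic Kabsch/Procrustes fit as a crude stand-in for the paper’s variational formulation; it is an illustration of the concept, not the researchers’ code.

```python
# Toy stand-in: find the rigid rotation R and translation t that best explain the
# change from shape P to shape Q, by minimizing total squared vertex displacement
# (a crude proxy for "least energy" -- the real method solves a proper variational
# problem over a whole motion, not a per-frame Procrustes fit).
import numpy as np

def least_motion_rigid_fit(P, Q):
    """P, Q: (n, 3) vertex positions before/after the shape change.
    Returns R (3x3), t (3,) so that each row p of P maps to approximately R @ p + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of the centred shapes
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Example: stretch a shape slightly, then rotate and shift it; the fit recovers
# (approximately) the rigid part of the motion, separating it from the deformation.
rng = np.random.default_rng(0)
P = rng.random((20, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = (Rz @ (P * np.array([1.05, 1.0, 1.0])).T).T + np.array([0.1, 0.0, 0.0])
R, t = least_motion_rigid_fit(P, Q)
print(np.round(R, 2))
print(np.round(t, 2))
```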

The Berlin-Pasadena group hammered together an early version of the algorithm in around a week, they say, intending to simulate the wriggling of an earthworm. The researchers soon realized, however, they could simulate other life-forms too. They implemented their algorithm within a 3D modeling environment—SideFX’s Houdini—and test-drove it on a menagerie of computerized creatures, ranging in complexity from a 14-vertex piece of pipe to a 160-vertex fish to a 600-vertex underwater robot to a 7100-vertex eel. When the algorithm examined real-world creatures like a sperm cell, a stingray, a jellyfish, a diver, and a falling cat, its output closely matched real-world imagery.

Gross says his group developed the algorithm without any particular use in mind. However, since much of the group’s research is in aid of computer graphics, they’ve begun thinking of applications in that realm.

In the near future, Gross and his colleagues want to build the algorithm out into a full-fledged animation pipeline. To wit, Gross pictures a machine learning model that examines a video of a moving animal and extracts a sequence of 3D meshes. An animator could then feed those meshes into the shape-change algorithm and find a movement that makes them happen.

In a different type of virtual world, Gross also imagines that robot-builders could use the algorithm to understand the limits and capabilities of their machine in the real world. “You could perform initial tests on a computer, if [the robot] can actually perform these desired motions, without having to build a costly prototype,” he says.

The researchers’ algorithm is currently limited to finding shape changes. What it cannot do, but what Gross says he hopes to enable soon, is to take in a designated point A and point B and find a specific course of motion that will bring a creature from start to finish.

The group’s algorithm was recently published in the journal ACM Transactions on Graphics and made available online as a .zip file.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 02 February 2024, ZURICH

Enjoy today’s videos!

Engineers at the University of Colorado Boulder have designed a new tiny robot that can passively change its shape to squeeze through narrow gaps. The machine, named Compliant Legged Articulated Robotic Insect, or CLARI, borrows its inspiration from the squishiness and various shapes of the world of bugs.

[ CU Boulder ]

We’ve got a huge feature on the University of Zurich’s autonomous vision-based racing drones, which you should absolutely read in full after watching this summary video.

[ Nature ] via [ UZH RPG ]

This is just CGI, but this research group has physical robots that are making this happen.

[ Freeform Robotics ]

This video gives a preview of the recent collaboration between Cyberbotics Lab and Injury Biomechanics Research Lab (IBRL) at Ohio State University on legged robot locomotion testing with the linear impactor.

[ Cyberbotics ]

This is our smallest swimming crawling version (SAW design). It is 6 centimeters long! The wave motion is actuated by a single motor. We attached floats, as the motor and battery were too heavy.

[ Zarrouk Lab ]

A pair of Digits approaches a railway line, and you won’t believe what happens next!

[ Agility Robotics ]

Breakfast time with Reachy O_o

[ Pollen Robotics ]

Suction cups are necessary for all kinds of logistics robots, but they’re necessarily pretty fragile. Wouldn’t it be nice to have suction cups that heal themselves?

[ BruBotics ]

Thanks, Bram!

We present a simple approach to in-hand cube reconfiguration. By simplifying planning, control, and perception as much as possible while maintaining robust and general performance, we gain insights into the inherent complexity of in-hand cube reconfiguration. The proposed system outperforms a substantially more complex system for cube reconfiguration based on deep learning and accurate physical simulation, contributing arguments to the discussion about what the most promising approach to general manipulation might be.

[ TU Berlin ]

Our latest augmented-reality developments for command, control, and supervision of autonomous agents in a three-operator/two-robot human-robot team. The views shown are the first-person views of three HoloLens 2 users and one top-down view of a satellite map with all team members visible throughout the entire demonstration.

[ UT ]

ABB robots go to White Castle.

[ ABB ]

In addition to completing tasks quickly and efficiently, agility allows legged robots to move through complex environments that are otherwise difficult to traverse. In “Barkour: Benchmarking Animal-Level Agility With Quadruped Robots,” we introduce the Barkour agility benchmark for quadruped robots, along with a Transformer-based generalist locomotion policy.

[ Google Research ]

This week, Geordie Rose (CEO) and Suzanne Gildert (CTO) of Sanctuary AI muse about the idea of a “humanoid olympics” while discussing how humanoid robots and their respective companies can be ranked. They go over potential metrics for evaluating different humanoid robots—as well as what counts as a humanoid, what doesn’t, and why.

[ Sanctuary ]



The drone screams. It’s flying so fast that following it with my camera is hopeless, so I give up and watch in disbelief. The shrieking whine from the four motors of the racing quadrotor Dopplers up and down as the drone twists, turns, and backflips its way through the square plastic gates of the course at a speed that is literally superhuman. I’m cowering behind a safety net, inside a hangar at an airfield just outside of Zurich, along with the drone’s creators from the Robotics and Perception Group at the University of Zurich.

“I don’t even know what I just watched,” says Alex Vanover, as the drone comes to a hovering halt after completing the 75-meter course in 5.3 seconds. “That was beautiful,” Thomas Bitmatta adds. “One day, my dream is to be able to achieve that.” Vanover and Bitmatta are arguably the world’s best drone-racing pilots, multiyear champions of highly competitive international drone-racing circuits. And they’re here to prove that human pilots have not been bested by robots. Yet.

AI Racing FPV Drone Full Send! - University of Zurich youtu.be

Comparing these high-performance quadrotors to the kind of drones that hobbyists use for photography is like comparing a jet fighter to a light aircraft: Racing quadrotors are heavily optimized for speed and agility. A typical racing quadrotor can output 35 newton meters (26 pound-feet) of force, with four motors spinning tribladed propellers at 30,000 rpm. The drone weighs just 870 grams, including a 1,800-milliampere-hour battery that lasts a mere 2 minutes. This extreme power-to-weight ratio allows the drone to accelerate at 4.5 gs, reaching 100 kilometers per hour in less than a second.
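That “less than a second” figure follows directly from the quoted acceleration; a quick sanity check:

```python
# At a constant 4.5 g, how long does it take to reach 100 km/h?
g = 9.81                 # m/s^2
accel = 4.5 * g          # ~44 m/s^2
v_target = 100 / 3.6     # 100 km/h in m/s, ~27.8 m/s
print(f"time to 100 km/h: {v_target / accel:.2f} s")   # ~0.63 s
```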

The autonomous racing quadrotors have similar specs, but the one we just saw fly doesn’t have a camera because it doesn’t need one. Instead, the hangar has been equipped with a 36-camera infrared tracking system that can localize the drone within millimeters, 400 times every second. By combining the location data with a map of the course, an off-board computer can steer the drone along an optimal trajectory, which would be difficult, if not impossible, for even the best human pilot to match.
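In practice, “steer the drone along an optimal trajectory” means a fast feedback loop: at each of those 400 updates per second, compare where the drone is with where the reference trajectory says it should be, and command a correction. The sketch below is a generic PD-plus-feedforward tracking step with made-up gains, not the UZH team’s controller.

```python
# Generic trajectory-tracking step: PD feedback on position/velocity error plus
# feedforward of the reference acceleration. Gains are illustrative, not tuned values.
# (The external tracking system provides pos/vel at 400 Hz.)
import numpy as np

KP, KD = 12.0, 5.0   # assumed proportional/derivative gains

def tracking_accel(pos, vel, ref_pos, ref_vel, ref_acc):
    return ref_acc + KP * (ref_pos - pos) + KD * (ref_vel - vel)

# One control step: the drone is slightly off the reference; the command pulls it back.
pos = np.array([1.0, 0.2, 1.5]);  vel = np.array([10.0, 0.0, 0.0])
ref_pos = np.array([1.1, 0.0, 1.5]);  ref_vel = np.array([11.0, 0.0, 0.0]);  ref_acc = np.zeros(3)
print(tracking_accel(pos, vel, ref_pos, ref_vel, ref_acc))   # desired acceleration, m/s^2
```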

These autonomous drones are, in a sense, cheating. The human pilots have access only to the single view from a camera mounted on the drone, along with their knowledge of the course and their flying experience. So it’s really no surprise that US $400,000 worth of sensors and computers can outperform a human pilot. But the reason these professional drone pilots came to Zurich is to see how they would do in a competition that’s actually fair.

A human-piloted racing drone [red] chases an autonomous vision-based drone [blue] through a gate at over 13 meters per second. Leonard Bauersfeld

Solving Drone Racing

By the Numbers: Autonomous Racing Drones
Frame size: 215 millimeters
Weight: 870 grams
Maximum thrust: 35 newton meters (26 pound-feet)
Flight duration: 2 minutes
Acceleration: 4.5 gs
Top speed: 130+ kilometers per hour
Onboard sensing: Intel RealSense T265 tracking camera
Onboard computing: Nvidia Jetson TX2

“We’re trying to make history,” says Davide Scaramuzza, who leads the Robotics and Perception Group at the University of Zurich (UZH). “We want to demonstrate that an AI-powered, vision-based drone can achieve human-level, and maybe even superhuman-level, performance in a drone race.” Using vision is the key here: Scaramuzza has been working on drones that sense the way most people do, relying on cameras to perceive the world around them and making decisions based primarily on that visual data. This is what will make the race fair—human eyes and a human brain versus robotic eyes and a robotic brain, each competitor flying the same racing quadrotors as fast as possible around the same course.

“Drone racing [against humans] is an ideal framework for evaluating the progress of autonomous vision-based robotics,” Scaramuzza explains. “And when you solve drone racing, the applications go much further because this problem can be generalized to other robotics applications, like inspection, delivery, or search and rescue.”

While there are already drones doing these tasks, they tend to fly slowly and carefully. According to Scaramuzza, being able to fly faster can make drones more efficient, improving their flight duration and range and thus their utility. “If you want drones to replace humans at dull, difficult, or dangerous tasks, the drones will have to do things faster or more efficiently than humans. That is what we are working toward—that’s our ambition,” Scaramuzza explains. “There are many hard challenges in robotics. Fast, agile, autonomous flight is one of them.”

Autonomous Navigation

Scaramuzza’s autonomous-drone system, called Swift, starts with a three-dimensional map of the course. The human pilots have access to this map as well, so that they can practice in simulation. The goal of both human and robot-drone pilots is to fly through each gate as quickly as possible, and the best way of doing this is via what’s called a time-optimal trajectory.

Robots have an advantage here because it’s possible (in simulation) to calculate this trajectory for a given course in a way that is provably optimal. But knowing the optimal trajectory gets you only so far. Scaramuzza explains that simulations are never completely accurate, and things that are especially hard to model—including the turbulent aerodynamics of a drone flying through a gate and the flexibility of the drone itself—make it difficult to stick to that optimal trajectory.

While the human-piloted drones [red] are each equipped with an FPV camera, each of the autonomous drones [blue] has an Intel RealSense vision system powered by an Nvidia Jetson TX2 onboard computer. Both sets of drones are also equipped with reflective markers that are tracked by an external camera system. Evan Ackerman

The solution, says Scaramuzza, is to use deep-reinforcement learning. You’re still training your system in simulation, but you’re also tasking your reinforcement-learning algorithm with making continuous adjustments, tuning the system to a specific track in a real-world environment. Some real-world data is collected on the track and added to the simulation, allowing the algorithm to incorporate realistically “noisy” data to better prepare it for flying the actual course. The drone will never fly the most mathematically optimal trajectory this way, but it will fly much faster than it would using a trajectory designed in an entirely simulated environment.
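Conceptually, the trick is to let the simulator inherit the real course’s imperfections. The sketch below illustrates the idea with a point-mass model whose dynamics are perturbed by residuals nominally logged from real flights; it is a cartoon of the approach, not Swift’s training code, and every number in it is a placeholder.

```python
# Cartoon of "train in simulation, but with real-world noise mixed in": a nominal
# point-mass model is perturbed at each step by a residual acceleration sampled
# from errors logged on the real track. (Placeholder data, not Swift's pipeline.)
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are logged differences between predicted and observed acceleration.
logged_residuals = rng.normal(0.0, 0.8, size=(500, 3))   # m/s^2, placeholder samples

def step_nominal(pos, vel, accel_cmd, dt=0.01):
    """Idealized point-mass model: commanded acceleration plus gravity."""
    g = np.array([0.0, 0.0, -9.81])
    vel = vel + (accel_cmd + g) * dt
    pos = pos + vel * dt
    return pos, vel

def step_augmented(pos, vel, accel_cmd, dt=0.01):
    """Same model, perturbed by a residual drawn from the real-flight log."""
    residual = logged_residuals[rng.integers(len(logged_residuals))]
    return step_nominal(pos, vel, accel_cmd + residual, dt)

pos, vel = np.zeros(3), np.zeros(3)
for _ in range(100):   # a learning algorithm would train on rollouts like this one
    pos, vel = step_augmented(pos, vel, accel_cmd=np.array([0.0, 0.0, 15.0]))
print(np.round(pos, 2))
```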

From there, the only thing that remains is to determine how far to push Swift. One of the lead researchers, Elia Kaufmann, quotes Mario Andretti: “If everything seems under control, you’re just not going fast enough.” Finding that edge of control is the only way the autonomous vision-based quadrotors will be able to fly faster than those controlled by humans. “If we had a successful run, we just cranked up the speed again,” Kaufmann says. “And we’d keep doing that until we crashed. Very often, our conditions for going home at the end of the day are either everything has worked, which never happens, or that all the drones are broken.”


Although the autonomous vision-based drones were fast, they were also less robust. Even small errors could lead to crashes from which the autonomous drones could not recover. Regina Sablotny

How the Robots Fly

Once Swift has determined its desired trajectory, it needs to navigate the drone along that trajectory. Whether you’re flying a drone or driving a car, navigation involves two fundamental things: knowing where you are and knowing how to get where you want to go. The autonomous drones have calculated the time-optimal route in advance, but to fly that route, they need a reliable way to determine their own location as well as their velocity and orientation.

To that end, the quadrotor uses an Intel RealSense vision system to identify the corners of the racing gates and other visual features to localize itself on the course. An Nvidia Jetson TX2 module, which includes a GPU, a CPU, and associated hardware, manages all of the image processing and control on board.
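Those gate corners are what make the problem tractable: four known 3D points seen in a single image are enough to pin down the camera’s pose relative to the gate. Below is one standard way to do that step, a Perspective-n-Point solve with OpenCV; the gate size, corner pixels, and camera intrinsics are assumptions for illustration, and the UZH pipeline is considerably more sophisticated.

```python
# Minimal example: turn four detected gate corners into a camera pose with a
# standard Perspective-n-Point solve. All numbers here are illustrative assumptions.
import numpy as np
import cv2

GATE_SIDE = 1.5  # metres, assumed gate size
# Corner coordinates in the gate's own frame (square gate centred at the origin).
object_pts = np.array([[-GATE_SIDE / 2, -GATE_SIDE / 2, 0],
                       [ GATE_SIDE / 2, -GATE_SIDE / 2, 0],
                       [ GATE_SIDE / 2,  GATE_SIDE / 2, 0],
                       [-GATE_SIDE / 2,  GATE_SIDE / 2, 0]], dtype=np.float64)

# Pixel coordinates of the same corners as reported by a gate detector (example values).
image_pts = np.array([[310, 370], [420, 372], [425, 255], [305, 260]], dtype=np.float64)

# Assumed pinhole intrinsics: focal length and principal point in pixels.
K = np.array([[450.0, 0.0, 320.0],
              [0.0, 450.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)   # assume an already-undistorted image

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
print("gate position in the camera frame (m):", np.round(tvec.ravel(), 2))
```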

Using only vision imposes significant constraints on how the drone flies. For example, while quadrotors are equally capable of flying in any direction, Swift’s camera needs to point forward most of the time. There’s also the issue of motion blur, which occurs when the exposure length of a single frame in the drone’s camera feed is long enough that the drone’s own motion over that time becomes significant. Motion blur is especially problematic when the drone is turning: The high angular velocity results in blurring that essentially renders the drone blind. The roboticists have to plan their flight paths to minimize motion blur, finding a compromise between a time-optimal flight path and one that the drone can fly without crashing.
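The blur problem is easy to quantify: the streak length in pixels is roughly the angular rate times the exposure time, scaled by the focal length in pixels. With some assumed, not measured, camera settings:

```python
# Rough motion-blur estimate during a fast yaw turn. Illustrative numbers only.
import math

omega = math.radians(300)   # 300 deg/s yaw rate during an aggressive turn, assumed
exposure = 0.005            # 5 ms exposure time, assumed
focal_px = 450              # focal length in pixels, assumed

blur_px = omega * exposure * focal_px
print(f"~{blur_px:.0f} pixels of blur")   # ~12 px, enough to smear gate corners
```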

Davide Scaramuzza [far left], Elia Kaufmann [far right], and other roboticists from the University of Zurich watch a close race. Regina Sablotny

How the Humans Fly

For the human pilots, the challenges are similar. The quadrotors are capable of far better performance than pilots normally take advantage of. Bitmatta estimates that he flies his drone at about 60 percent of its maximum performance. But the biggest limiting factor for the human pilots is the video feed.

People race drones in what’s called first-person view (FPV), using video goggles that display a real-time feed from a camera mounted on the front of the drone. The FPV video systems that the pilots used in Zurich can transmit at 60 interlaced frames per second in relatively poor analog VGA quality. In simulation, drone pilots practice in HD at over 200 frames per second, which makes a substantial difference. “Some of the decisions that we make are based on just four frames of data,” explains Bitmatta. “Higher-quality video, with better frame rates and lower latency, would give us a lot more data to use.” Still, one of the things that impresses the roboticists the most is just how well people perform with the video quality available. It suggests that these pilots develop the ability to perform the equivalent of the robot’s localization and state-estimation algorithms.

It seems as though the human pilots are also attempting to calculate a time-optimal trajectory, Scaramuzza says. “Some pilots have told us that they try to imagine an imaginary line through a course, after several hours of rehearsal. So we speculate that they are actually building a mental map of the environment, and learning to compute an optimal trajectory to follow. It’s very interesting—it seems that both the humans and the machines are reasoning in the same way.”

But in his effort to fly faster, Bitmatta tries to avoid following a predefined trajectory. “With predictive flying, I’m trying to fly to the plan that I have in my head. With reactive flying, I’m looking at what’s in front of me and constantly reacting to my environment.” Predictive flying can be fast in a controlled environment, but if anything unpredictable happens, or if Bitmatta has even a brief lapse in concentration, the drone will have traveled tens of meters before he can react. “Flying reactively from the start can help you to recover from the unexpected,” he says.

Will Humans Have an Edge?

“Human pilots are much more able to generalize, to make decisions on the fly, and to learn from experiences than are the autonomous systems that we currently have,” explains Christian Pfeiffer, a neuroscientist turned roboticist at UZH who studies how human drone pilots do what they do. “Humans have adapted to plan into the future—robots don’t have that long-term vision. I see that as one of the main differences between humans and autonomous systems right now.”

Scaramuzza agrees. “Humans have much more experience, accumulated through years of interacting with the world,” he says. “Their knowledge is so much broader because they’ve been trained across many different situations. At the moment, the problem that we face in the robotics community is that we always need to train an algorithm for each specific task. Humans are still better than any machine because humans can make better decisions in very complex situations and in the presence of imperfect data.”

“I think there’s a lot that we as humans can learn from how these robots fly.” —Thomas Bitmatta

This understanding that humans are still far better generalists has placed some significant constraints on the race. The “fairness” is heavily tilted toward the robots: the race, while designed to be as equal as possible, takes place in the only environment in which Swift is likely to have a chance. The roboticists have done their best to minimize unpredictability—there’s no wind inside of the hangar, for example, and the illumination is tightly controlled. “We are using state-of-the-art perception algorithms,” Scaramuzza explains, “but even the best algorithms still have a lot of failure modes because of illumination changes.”

To ensure consistent lighting, almost all of the data for Swift’s training was collected at night, says Kaufmann. “The nice thing about night is that you can control the illumination; you can switch on the lights and you have the same conditions every time. If you fly in the morning, when the sunlight is entering the hangar, all that backlight makes it difficult for the camera to see the gates. We can handle these conditions, but we have to fly at slower speeds. When we push the system to its absolute limits, we sacrifice robustness.”

Race Day

The race starts on a Saturday morning. Sunlight streams through the hangar’s skylights and open doors, and as the human pilots and autonomous drones start to fly test laps around the track, it’s immediately obvious that the vision-based drones are not performing as well as they did the night before. They’re regularly clipping the sides of the gates and spinning out of control, a telltale sign that the vision-based state estimation is being thrown off. The roboticists seem frustrated. The human pilots seem cautiously optimistic.

The winner of the competition will fly the three fastest consecutive laps without crashing. The humans and the robots pursue that goal in essentially the same way, by adjusting the parameters of their flight to find the point at which they’re barely in control. Quadrotors tumble into gates, walls, floors, and ceilings, as the racers push their limits. This is a normal part of drone racing, and there are dozens of replacement drones and staff to fix them when they break.

Professional drone pilot Thomas Bitmatta [left] examines flight paths recorded by the external tracking system. The human pilots felt they could fly better by studying the robots. Evan Ackerman

There will be several different metrics by which to decide whether the humans or the robots are faster. The external localization system used to actively control the autonomous drone last night is being used today for passive tracking, recording times for each segment of the course, each lap of the course, and for each three-lap multidrone race.

As the human pilots get comfortable with the course, their lap times decrease. Ten seconds per lap. Then 8 seconds. Then 6.5 seconds. Hidden behind their FPV headsets, the pilots are concentrating intensely as their shrieking quadrotors whirl through the gates. Swift, meanwhile, is much more consistent, typically clocking lap times below 6 seconds but frequently unable to complete three consecutive laps without crashing. Seeing Swift’s lap times, the human pilots push themselves, and their lap times decrease further. It’s going to be very close.

Zurich Drone Racing: AI vs Human https://rpg.ifi.uzh.ch/

The head-to-head races start, with Swift and a human pilot launching side-by-side at the sound of the starting horn. The human is immediately at a disadvantage, because a person’s reaction time is slow compared to that of a robot: Swift can launch in less than 100 milliseconds, while a human takes about 220 ms to hear a noise and react to it.
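That reaction-time gap turns into real distance at race accelerations. Using the launch figures quoted in this article:

```python
# How much of a head start does a 120-millisecond reaction-time gap buy?
g = 9.81
accel = 4.5 * g            # launch acceleration quoted earlier, m/s^2
dt_gap = 0.220 - 0.100     # human reaction time minus robot launch latency, s
head_start = 0.5 * accel * dt_gap ** 2
print(f"head start: ~{head_start:.2f} m")   # ~0.3 m before the human even reacts
```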

UZH’s Elia Kaufmann prepares an autonomous vision-based drone for a race. Since landing gear would only slow racing drones down, they take off from stands, which allows them to launch directly toward the first gate. Evan Ackerman

On the course, the human pilots can almost keep up with Swift: The robot’s best three-lap time is 17.465 seconds, while Bitmatta’s is 18.746 seconds and Vanover manages 17.956 seconds. But in nine head-to-head races with Swift, Vanover wins four, and in seven races, Bitmatta wins three. That’s because Swift doesn’t finish the majority of the time, colliding either with a gate or with its opponent. The human pilots can recover from collisions, even relaunching from the ground if necessary. Swift doesn’t have those skills. The robot is faster, but it’s also less robust.

Zurich Drone Racing: Onboard View https://rpg.ifi.uzh.ch/

Getting Even Faster

Thomas Bitmatta, two-time MultiGP International Open World Cup champion, pilots his drone through the course in FPV (first-person view). Regina Sablotny

In drone racing, crashing is part of the process. Both Swift and the human pilots crashed dozens of drones, which were constantly being repaired. Regina Sablotny

“The absolute performance of the robot—when it’s working, it’s brilliant,” says Bitmatta, when I speak to him at the end of race day. “It’s a little further ahead of us than I thought it would be. It’s still achievable for humans to match it, but the good thing for us at the moment is that it doesn’t look like it’s very adaptable.”

UZH’s Kaufmann doesn’t disagree. “Before the race, we had assumed that consistency was going to be our strength. It turned out not to be.” Making the drone more robust so that it can adapt to different lighting conditions, Kaufmann adds, is mostly a matter of collecting more data. “We can address this by retraining the perception system, and I’m sure we can substantially improve.”

Kaufmann believes that under controlled conditions, the potential performance of the autonomous vision-based drones is already well beyond what the human pilots are capable of. Even if this wasn’t conclusively proved through the competition, bringing the human pilots to Zurich and collecting data about how they fly made Kaufmann even more confident in what Swift can do. “We had overestimated the human pilots,” he says. “We were measuring their performance as they were training, and we slowed down a bit to increase our success rate, because we had seen that we could fly slower and still win. Our fastest strategies accelerate the quadrotor at 4.5 gs, but we saw that if we accelerate at only 3.8 gs, we can still achieve a safe win.”

Bitmatta feels that the humans have a lot more potential, too. “The kind of flying we were doing last year was nothing compared with what we’re doing now. Our rate of progress is really fast. And I think there’s a lot that we as humans can learn from how these robots fly.”

Useful Flying Robots

As far as Scaramuzza is aware, the event in Zurich, which was held last summer, was the first time that a fully autonomous mobile robot achieved world-champion performance in a real-world competitive sport. But, he points out, “this is still a research experiment. It’s not a product. We are very far from making something that can work in any environment and any condition.”

Besides making the drones more adaptable to different lighting conditions, the roboticists are teaching Swift to generalize from a known course to a new one, as humans do, and to safely fly around other drones. All of these skills are transferable and will eventually lead to practical applications. “Drone racing is pushing an autonomous system to its absolute limits,” roboticist Christian Pfeiffer says. “It’s not the ultimate goal—it’s a stepping-stone toward building better and more capable autonomous robots.” When one of those robots flies through your window and drops off a package on your coffee table before zipping right out again, these researchers will have earned your thanks.

Scaramuzza is confident that his drones will one day be the champions of the air—not just inside a carefully controlled hangar in Zurich but wherever they can be useful to humanity. “I think ultimately, a machine will be better than any human pilot, especially when consistency and precision are important,” he says. “I don’t think this is controversial. The question is, when? I don’t think it will happen in the next few decades. At the moment, humans are much better with bad data. But this is just a perception problem, and computer vision is making giant steps forward. Eventually, robotics won’t just catch up with humans, it will outperform them.”

Meanwhile, the human pilots are taking this in stride. “Seeing people use racing as a way of learning—I appreciate that,” Bitmatta says. “Part of me is a racer who doesn’t want anything to be faster than I am. And part of me is really excited for where this technology can lead. The possibilities are endless, and this is the start of something that could change the whole world.”

This article appears in the September 2023 print issue as “Superhuman Speed: AI Drones for the Win.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 02 February 2024, ZURICH

Enjoy today’s videos!

For US $2.7 million, one of these can be yours.

[ Impress ]

Here’s a little bit more IRL footage of Apptronik’s Apollo, which was announced this week.

[ Apptronik ]

TruckBot is an autonomous robot that can unload both truck trailers and shipping containers at a rate of up to 1,000 cases per hour. It reaches up to 52 feet [16 meters] into the truck trailer or shipping container and can handle boxes weighing up to 50 lbs [23 kilograms], including containers with packing complexities and mixed SKU loads.

[ Mujin ]

These high-speed robot hands from the late 1990s and 2000s are still impressive.

[ Namiki Laboratory ]

This is maybe the jauntiest robot I’ve ever seen.

[ UPenn ]

Or maybe this is the jauntiest robot I’ve ever seen.

[ Deep Robotics ]

Turns out, if you make feet into hydrofoil shapes and put a pair of legs into a water current that’s been disturbed by a cylinder, you’ll get a fairly convincing biological walking gait.

[ UMass Amherst ]

Thanks, Julia!

Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To the robot, each spot where the box could touch any point on the carrier’s fingers, arms, and torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable. Now MIT researchers have found a way to simplify this process, known as contact-rich manipulation planning.

Okay, but I want to know more about Mr. Bucket <3.

[ MIT News ]

By collaborating with Dusty on the Stanford University Bridge Project, California Drywall was able to cut layout time in half and fast-track the installation project.

[ Dusty Robotics ]

PILOTs for robotic INspection and maintenance Grounded on advanced intelligent platforms and prototype applications (PILOTING) is an H2020 European project coordinated by CATEC. The variety of inspection and maintenance operations considered within the project requires the use of different robotic systems. A series of robotic vehicles from PILOTING partners has been adapted/developed and integrated within the PILOTING I&M platform.

[ GRVC ]

A NASA flight campaign aims to enable drones to land safely on rooftop hubs called vertiports for future delivery of people and goods. The campaign may also lead to improvements in weather prediction.

[ NASA ]

An unscripted but presumably edited long interview with Robot Sophia, if you’re into that particular kind of theater.

[ Hanson Robotics ]

Thanks, Dan!



Back in January, Apptronik said it was working on a new commercial general-purpose humanoid robot called Apollo. I say “new” because over the past seven or eight years Apptronik has developed more than half a dozen humanoid robots along with a couple of full-body exoskeletons. But as the company told us earlier this year, it has decided that now is absolutely definitely for sure the time for bipedal humanoids to go commercial.

Today, Apptronik is unveiling Apollo. It says the robot is “designed to transform the industrial workforce and beyond in service of improving the human experience.” It will first be used in logistics and manufacturing, but Apptronik promises “endless potential applications long term.” Still, the company must make it happen: It’s a big step from a prototype to a commercial product.

The biped that we saw in January was a prototype for Apollo, but today Apptronik is showing an alpha version of the real thing. The robot is roughly human-size, standing 1.7 meters tall and weighing 73 kilograms, with a maximum payload of 25 kg. It can run for about 4 hours on a swappable battery. The company has two of these robots right now, and it is building four more.

While Apptronik is initially focused on case and tote handling solutions in the logistics and manufacturing industries, Apollo is a general-purpose robot that is designed to work in the real world, where development partners will extend Apollo’s solutions far beyond logistics and manufacturing, eventually extending into construction, oil and gas, electronics production, retail, home delivery, elder care, and countless more. Apollo is the “iPhone” of robots, enabling development partners to expand on Apptronik-developed solutions and extend the digital world into the physical world to work alongside people and do the jobs that they don’t want to do.

I’m generally not a huge fan of the “iPhone of robots” analogy, primarily because the iPhone was cost-effective and widely desirable as a multipurpose tool even before developers really got involved with it. Historically, robots have not been successful in this way. It’ll take some time to learn whether Apollo will be able to demonstrate that out-of-the-box versatility, but my guess is that the initial success of Apollo (as with basically every other robot) will depend primarily on what practical applications Apptronik itself will be able to set it up for. Maybe at some point humanoids will be so affordable and easy to use that there will be an open-ended developer market, but we’re nowhere close to that yet.

Pretty much all the humanoid robots entering the market are meant for the handling of standard containers, known as cases and totes. And for good reason: The job is dull and physically taxing, and there aren’t enough people willing to do it. There’s plenty of room for robots like Apollo, provided the cost isn’t too high.

To understand how Apollo can be competitive, we spoke with Apptronik CEO Jeff Cardenas and CTO Nick Paine.

How are you going to make Apollo affordable?

Jeff Cardenas: This isn’t our first humanoid that we’ve built—we’ve done about eight. The approach that we took with our robots early on was to just build the best thing we could, and worry about getting the cost down later. But we would hit a wall each time. A big focus with Apollo was to not do that again. We had to start thinking about cost from the very beginning, and we needed to make sure that the first alpha unit that we build is as close to the gamma unit as possible. A lot of people will wave a wand and say, “There’s going to be millions of humanoids one day, so things like harmonic drives are going to become much cheaper at scale.” But when you actually quote components at really high volumes, you don’t get the price break you think you’ll get. The electronics—the motor drivers with the actuators—60 percent or more of the cost of the system is there.

Nick Paine: We are trying to think about Apollo from a long-term perspective. We wanted to avoid the situation where we’d build a robot just to show that we could do something, but then have to figure out how to swap out expensive high-precision parts for something else while presenting our controls team with an entirely new problem as well.

So the focus is on Apollo’s actuators?

Paine: Apptronik is a little unique in that we’ve built up actuation experience through a range of projects that we’ve worked on—I think we’ve designed around 13 complete systems, so we’ve experienced the full gamut of what type of actuation architectures work well for what scenarios and what applications. Apollo is really a culmination of all that knowledge gathered over many years of iterative learning, optimized for the humanoid use case, and being very intentional about what properties from a first-principles standpoint that we wanted to have at each joint of the robot. That resulted in a combination of linear and rotary actuators throughout the system.

Cardenas: What we’re targeting is affordability, and part of how we get there is with our actuation approach. The new actuators we’re using have about a third fewer components than our previous actuators. They also take about a third of the assembly time. Long term, our road map is really focused around supply chain: How do we get away from single-source vendors and start to leverage components that are much more readily available? We think that’s going to be important for cost and scaling the systems long term.

Can you share some technical details on the actuators?

Paine: Folks can look at the patents when they come out, but I would chalk it up to our teams’ first-principles design experience, and past history of system-level integration.

But it’s not like you have some magical new actuator technology?

Cardenas: We’re not relying on fundamental breakthroughs to reach this threshold of performance. We need to get our robots out into the world, and we’re able to leverage technologies that already exist. And with our experience and a systems sort of thinking we’re putting it together in a novel way.

What does “affordable” mean in the context of a robot like Apollo?

Cardenas: I think long term, a humanoid needs to cost less than US $50,000. They should be comparable to the price of many cars.

Paine: I think actually we could be significantly cheaper than cars, based on the assumption that at scale, the cost of a product typically approaches the cost of its constituent materials. Cars weigh about 1,800 kilograms, and our robot weighs 70 kilograms. That’s 25 times less raw materials. And as Jeff said, we already have a path and a supply chain for very cost-effective actuators. I think that’s a really interesting analysis to think about, and we’re excited to see where it goes.

Some of the videos show Apollo with a five-fingered hand. What’s your perspective on end effectors?

Cardenas: We think that long term, hands will be important for humanoids, although they won’t necessarily have to be five-fingered hands. The end effector is modular. For first applications when we’re picking boxes, we don’t need a five-finger hand for that. And so we’re going to simplify the problem and deploy with a simpler end effector.

Paine: I feel like some folks are trying to do hands because they think it’s cool, or because it shows that their team is capable. The way that I think about it is, humanoids are hard enough as they are—there are a lot of challenges and complexities to figure out. We are a very pragmatic team from an engineering standpoint, and we are very careful about choosing our battles, putting our resources where they’re most valuable. And so for the alpha version of Apollo, we have a modular interface with the wrist. We are not solving the generic five-finger fine dexterity and manipulation problem. But we do think that long term, the best versatile end effector is a hand.

These initial applications that you’re targeting with Apollo don’t seem to be leveraging its bipedal mobility. Why have a robot with legs at all?

Cardenas: One of the things that we’ve learned about legs is that they address the need for reaching the ground and reaching up high. If you try to solve that problem with wheels, then you end up with a really big base, because it has to be statically stable. The customers that we’re working with are really interested in this idea of retrofitability. They don’t want to have to make workspace changes. The workstations are really narrow—they’re designed around the human form, and so we think legs are going to be the way to get there.

Legs are an elegant solution to achieving a lightweight system that can operate at large vertical workspaces in small footprints. —Nick Paine, Apptronik CTO

Can Apollo safely fall over and get back up?

Paine: A very important requirement is that Apollo needs to be able to fall over and not break, and that drives some key actuation requirements. One of the unique things with Apollo is that not only is it well suited for OSHA-level manipulation of payloads, but it’s also well suited for robustly handling impacts with the environment. And from a maintenance standpoint, two bolts is all you need to remove to swap out an actuator.

Cardenas says that Apptronik has more than 10 pilots planned with case picking as the initial application. The rest of this year will be focused on in-house demonstrations with the Apollo alpha units, with field pilots planned for next year with production robots. Full commercial release is planned for the end of 2024. It’s certainly an aggressive timeline, but Apptronik is confident in its approach. “The beauty of robotics is in showing versus telling,” Cardenas says. “That’s what we’re trying to do with this launch.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Loco-manipulation planning skills are pivotal for expanding the utility of robots in everyday environments. Here, we propose a minimally guided framework that automatically discovers whole-body trajectories jointly with contact schedules for solving general loco-manipulation tasks in premodeled environments. We showcase emergent behaviors for a quadrupedal mobile manipulator exploiting both prehensile and nonprehensile interactions to perform real-world tasks such as opening/closing heavy dishwashers and traversing spring-loaded doors.

I swear the cuteness of a quadruped using a lil foot to hold open a spring-loaded door just never gets old.

[ Science Robotics ] via [ RSL ]

In 2019, Susie Sensmeier became one of the first customers in the United States to receive a commercial drone delivery. She was hooked. Four years later, Susie and her husband, Paul, have had over 1,200 orders delivered to their front yard in Christiansburg, Va., via Wing’s drone delivery service. We believe this sets a world record.

[ Wing ]

At the RoboCup 2023, one challenge was the Dynamic Ball Handling Challenge. The defending team used a static image with the sole purpose of intercepting the ball. The attacking team’s goal was to do at least two passes followed by a goal. This procedure was repeated three times on three different days and fields.

[ B-Human ]

When it comes to space, humans and robots go way back. We rely heavily on our mechanical friends to perform tasks that are too dangerous, difficult, or out of reach for us humans. We’re even working on a new generation of robots that will help us explore in advanced and novel ways.

[ NASA ]

The KUKA Innovation Award has been held annually since 2014 and is addressed to developers, graduates, and research teams from universities and companies. For this year’s award, the applicants were asked to use open interfaces in our newly introduced robot operating system iiQKA and to add their own hardware and software components. Team SPIRIT from the Institute of Robotics and Mechatronics at the German Aerospace Center worked on the automation of maintenance and inspection tasks in the oil and gas industry.

[ Kuka ]

We present tasks of traversing challenging terrain that requires discovering a contact schedule, navigating non-convex obstacles, and coordinating many degrees of freedom. Our hybrid planner has been applied to three different robots: a quadruped, a wheeled quadruped, and a legged excavator. We validate our hybrid locomotion planner in the real world and simulation, generating behaviors we could not achieve with previous methods.

[ ETHZ ]

Giving drones hummingbird performance with no GPS, no motion capture, no cloud computing, and no prior map.

[ Ajna ]

In this video we introduce a new option for our Ridgeback Omnidirectional Indoor Mobile Platform, available through Clearpath Robotics integration services. This height-adjustable lift column is programmable through ROS and configurable with MoveIt!

[ Clearpath ]

How do robots understand their surroundings? How do they decide what to pick up next? And how do they learn how to pick it up?

[ Covariant ]

Our Phoenix robots can successfully and accurately perform tasks that require the dexterity of two hands simultaneously, also known as bimanual object manipulation!

[ Sanctuary AI ]

By this point, I should be able to just type O_o and you’ll know it’s a Reachy video, right?

[ Pollen Robotics ]



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Trees play clear and crucial roles in many ecosystems—whether they’re providing shade on a sunny day, serving as a home for a family of owls, or cycling carbon dioxide out of the air. But forests are diminishing through exploitative logging practices as well as the process of desertification, which turns grassland and shrubland arid.

Restoring biodiversity to these areas through growing new trees is a crucial first step, but planting seedlings in these arid environments can be both time and labor intensive. To address this problem, a team of researchers at Firat University and Adiyaman University—located in Elazig, Turkey, and Adiyaman, Turkey, respectively—has developed a concept design of a robot to drill holes and plant seedlings for up to 24 hours at a time.

Andrea Botta is an engineering professor at the Polytechnic University of Turin in Italy who has researched the use of agricultural robots. Botta, who did not contribute to this research, says that planting robots like this could fill an important gap in communities with smaller labor forces.

“Robots are very good at doing repetitive tasks like planting several trees [and] can also work for an extended period of time,” Botta says. “In a community with a significant lack of workers, complete automation is a fair approach.”

Tree-planting robots are not necessarily a new concept, and they can take on a variety of shapes and sizes. In their work, the research team in Turkey surveyed existing tree-planting robots, including quadrupeds, robots with caterpillar tracks, and wheeled robots. These robots, designed by groups such as students at the University of Victoria in Canada and engineers at Chinese tech giant Huawei, ran on steam, electric batteries, or diesel. Several of the robots were even designed to carry more than 300 seedlings on their backs at a time, cutting down on time spent going between a greenhouse and the planting site.

With these designs in mind, the research team in Turkey developed a 3D model of a robotic planter that had four wheels, a steel frame, and a back-mounted hydraulic drill. Using diesel power, this 136-kilogram robot is designed to drive 300 centimeters at a time before drilling a 50-cm hole for each seedling. In future iterations, the team plans to incorporate autonomous sensing.
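
To make that operating cycle concrete, here is a minimal, purely illustrative Python sketch of the drive-drill-plant loop described above. The function names, structure, and batch size are hypothetical placeholders for this article, not part of the team’s published design.

```python
# Purely illustrative sketch of the planting cycle described in the paper:
# drive 300 cm, drill a 50-cm hole, plant a seedling, and repeat.
# All function names and parameters below are hypothetical placeholders.

DRIVE_STEP_CM = 300   # distance driven between planting spots
HOLE_DEPTH_CM = 50    # depth of each drilled hole


def drive_forward_cm(distance_cm: float) -> None:
    """Hypothetical call to the diesel-powered, four-wheeled drive system."""
    print(f"Driving forward {distance_cm} cm")


def drill_hole_cm(depth_cm: float) -> None:
    """Hypothetical call to the back-mounted hydraulic drill."""
    print(f"Drilling a hole {depth_cm} cm deep")


def plant_seedling() -> None:
    """Hypothetical call to the seedling-handling mechanism."""
    print("Planting seedling")


def planting_cycle(num_seedlings: int) -> None:
    """Run the drive-drill-plant loop for a batch of seedlings."""
    for _ in range(num_seedlings):
        drive_forward_cm(DRIVE_STEP_CM)
        drill_hole_cm(HOLE_DEPTH_CM)
        plant_seedling()


if __name__ == "__main__":
    planting_cycle(num_seedlings=10)  # example batch size, chosen arbitrarily
```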

“As a future study, we plan to manufacture the robot we designed and develop autonomous motion algorithms,” the team writes in its paper. (The researchers declined to comment for this story.) “The rapid development of sensor technology in recent years and the acceleration of research on the fusion of multisensor data have paved the way for robots to gain environmental perception and autonomous movement capability.”

In particular, the team plans to mount environmental sensing units—including cameras and ultrasonic sensors—to a gimbal on the robot’s back. This sensor data will then feed into the motion- and object-detection algorithms the team plans to develop.
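
As a rough, hypothetical illustration of how such a perception layer might gate the planting loop sketched earlier, the snippet below assumes the ultrasonic sensor simply reports the range to the nearest obstacle; the threshold, function names, and placeholder reading are invented for this article and are not from the paper.

```python
# Hypothetical illustration only: gate the planting loop on an ultrasonic
# range reading, pausing if an obstacle is closer than a chosen threshold.

OBSTACLE_THRESHOLD_CM = 100.0  # arbitrary safety distance for this sketch


def read_ultrasonic_range_cm() -> float:
    """Hypothetical driver call returning the distance to the nearest obstacle, in cm."""
    return 250.0  # placeholder reading


def safe_to_advance() -> bool:
    """Drive or drill only when the path ahead is clear."""
    return read_ultrasonic_range_cm() > OBSTACLE_THRESHOLD_CM
```

In a real system, camera-based object detection would presumably run alongside simple range checks like this, which is where the team’s planned detection algorithms would come in.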

However, adding more autonomy to these types of robots doesn’t necessarily mean they should be given free rein over tree planting, Botta says, especially in situations where they may be working alongside human workers.

“Human-robot collaboration is a very trendy topic and should be carefully designed depending on the case,” he says. “Introducing automation to a job may also introduce problems too, so it should be applied appropriately considering the local communities and scenario. For example—if a large workforce is available, the robot should be designed considering a strong synergy with the workers to ease their burden while avoiding harming the community.”

In future iterations of this design, Botta also hopes that consideration will be paid to diversity in the planting environment and planting type. For example, suspension could be added for better all-terrain driving, or solar panels could provide supplemental power and allow the robot to operate where other fuel sources may not be readily available. Adding a renewable power option could also help ensure that the robots remain carbon neutral while planting.

Considering how a robot could handle multiple types of plants would also be important, Botta says.

“It seems that most—if not all—of the robotics solutions create tree farms, but probably what we need is planting actual forests with a significant biodiversity,” he says.

The work was presented in May at the 14th International Conference on Mechanical and Intelligent Manufacturing Technologies in Cape Town, South Africa.



Skydio, maker of the most autonomous consumer drone there ever was, has announced that it is getting out of the consumer drone space completely, as of this past week. The company will be focusing on “over 1,500 enterprise and public sector customers” that are, to be fair, doing many useful and important things with Skydio drones rather than just shooting videos of themselves like the rest of us. Sigh.

By a lot of metrics, the Skydio 2 is (was?) the most capable drone that it was possible for a consumer to get. Lots of drones advertise obstacle avoidance, but in my experience, none of them came anywhere close to the way that Skydio’s drones are able to effortlessly slide around even complex obstacles while reliably tracking you at speed. Being able to (almost) completely trust the drone to fly itself and film you while you ignored it was a magical experience that I don’t think any other consumer drone can offer. It’s rare that robots can operate truly autonomously in unstructured environments, and the Skydio 2 may have been the first robot to bring that to the consumer space. This capability blew my mind even as a very early prototype in 2016, and it still does.

But Skydio does not exist solely to blow my mind, which is unfortunate for me but probably healthy for them. Instead, the company is focusing more on the public sector, the military, and business customers, which have been using Skydio drones in all kinds of public safety and inspection applications. In addition to its technology, Skydio has an edge in that it’s one of just a handful of domestic drone producers approved by the DoD.

The impact we’re having with our enterprise and public sector customers has become so compelling that it demands nothing less than our full focus and attention. As a result, I have made the very difficult decision to sunset our consumer business in order to put everything we’ve got into serving our enterprise and public sector customers. —Adam Bry, Skydio CEO

So as of now, you can no longer buy a consumer Skydio 2 from Skydio.

The less terrible news is that Skydio has promised to continue to support existing consumer drone customers:

We stand by all warranty terms, Skydio Care, and will continue vehicle repairs. Additionally, we will retain inventory of accessories for as long as we can to support the need for replacement batteries, propellers, charging cables, etc.

And since the Skydio 2+ is still being produced for sale for enterprise customers, it seems like those parts and accessories may be available longer than they would be otherwise.

If you don’t have a Skydio 2 consumer drone and you desperately want one, there aren’t a lot of good options. Last time we checked, the Skydio 2+ enterprise kit was US $5,000. Most of that value is in software and support, since the consumer edition of the Skydio 2+ with similar accessories was closer to US $2,400. That leaves buying a Skydio 2 used, or at least, buying one from a source other than Skydio—at the moment, there are a couple of Skydio 2 drones on eBay, one of which is being advertised as new.

Lastly, there is some very tenuous suggestion that Skydio may not be done with the consumer drone space forever. In a FAQ on the company’s website about the change in strategy, Skydio does not explicitly rule out a future consumer drone, saying only that “we are not able to share any updates about our future product roadmap.” So I’m just going to cross my fingers and assume that a Skydio 3 may still one day be on the way.
