Feed aggregator

In recent decades, the increasing complexity of fusing proprioceptive and exteroceptive sensors with Global Navigation Satellite System (GNSS) measurements has motivated the exploration of Artificial Intelligence-related strategies for implementing navigation filters. To meet the strict accuracy and precision requirements of Intelligent Transportation Systems (ITS) and Robotics, Bayesian inference algorithms are at the basis of current Positioning, Navigation, and Timing (PNT). Some scientific and technical contributions resort to Sequential Importance Resampling (SIR) Particle Filters (PF) to overcome the theoretical weaknesses of the more popular and efficient Kalman Filters (KFs) when the application relies on nonlinear measurement models and non-Gaussian measurement errors. However, due to its higher computational burden, the SIR PF is generally discarded. This paper presents a methodology named Multiple Weighting (MW) that reduces the computational burden of the PF by considering the mutual information provided by the input measurements about the unknown state. The proposed scheme is assessed through an application to standalone GNSS estimation as a baseline for more complex, multi-sensor integrated solutions. By relying on a-priori knowledge of the relationship between states and measurements, a change in the conventional PF routine allows a more efficient sampling of the posterior distribution. Results show that the proposed strategy can achieve any desired accuracy with a considerable reduction in the number of particles. Given a fixed and reasonable available computational effort, the proposed scheme improves the accuracy of the state estimate by 20–40%.
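For readers unfamiliar with the baseline being improved upon, the following is a minimal sketch of a generic Sequential Importance Resampling particle filter on a one-dimensional toy problem. It is not the paper's Multiple Weighting scheme or its GNSS measurement model; the toy dynamics, noise levels, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, z, motion_std=0.5, meas_std=1.0):
    """One SIR update for a 1-D random-walk state observed directly with noise.

    Toy stand-in for the generic PF routine; a real GNSS filter would use a
    nonlinear pseudorange model instead of the identity observation h(x) = x.
    """
    # Propagate: sample from the (assumed) transition prior.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)

    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()

    # Resample (systematic) when the effective sample size gets low.
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (rng.random() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)

    return particles, weights, np.sum(particles * weights)

# Usage: track a constant true state of 3.0 from noisy measurements.
particles = rng.normal(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
for _ in range(20):
    z = 3.0 + rng.normal(0.0, 1.0)
    particles, weights, estimate = sir_step(particles, weights, z)
print(f"final estimate: {estimate:.2f}")
```

The resampling step is what the number of particles makes expensive in practice, which is why reducing the particle count without losing accuracy matters.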

In recent decades, unmanned aerial vehicles (UAVs) have gained considerable popularity in the agricultural sector, in which UAV-based actuation is used to spray pesticides and release biological control agents. A key challenge in such UAV-based actuation is to account for wind speed and UAV flight parameters to maximize the precision with which pesticides and biological control agents are delivered. This paper describes a data-driven framework to predict density distribution patterns of vermiculite dispensed from a hovering UAV as a function of the UAV’s movement state, wind condition, and dispenser setting. The model, derived by our proposed learning algorithm, accurately predicts the vermiculite distribution pattern, as evaluated on both training and test data. Our framework and algorithm can be easily translated to other precision pest management problems with different UAVs and dispensers and for different pesticides and crops. Moreover, our model, due to its simple analytical form, can be incorporated into the design of a controller that can optimize autonomous UAV delivery of desired amounts of predatory mites to multiple target locations.
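As a rough illustration of what such a data-driven surrogate can look like, the sketch below fits a low-order polynomial regression from flight, wind, and dispenser features to a dispersal-density summary on synthetic data. The features, target, and model family are assumptions chosen for illustration; they are not the authors' learning algorithm or dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# Synthetic stand-in data: columns are UAV altitude (m), wind speed (m/s),
# and dispenser setting (fraction open); the target is a made-up summary of
# the ground density pattern (downwind offset of its centroid, in meters).
X = np.column_stack([
    rng.uniform(2.0, 10.0, 200),   # altitude
    rng.uniform(0.0, 5.0, 200),    # wind speed
    rng.uniform(0.2, 1.0, 200),    # dispenser setting
])
y = 0.4 * X[:, 0] * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0.0, 0.2, 200)

# A low-order polynomial keeps the fitted model in a simple analytical form,
# the property that would make it usable inside a delivery controller.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)

print("train R^2:", round(model.score(X, y), 3))
print("predicted centroid offset at 6 m, 3 m/s, 0.5:",
      round(float(model.predict([[6.0, 3.0, 0.5]])[0]), 2), "m")
```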

Liquid crystal elastomers (LCEs) have shown great potential as soft actuating materials in soft robots, with large actuation strain and fast response speed. However, to achieve these actuation features, the liquid crystal mesogens must be well aligned and permanently fixed by polymer networks, which limits practical applications. Recent progress in 3D printing of LCEs has overcome the shortcomings of conventional processing techniques. In this study, the relationship between the 3D printing parameters and the actuation performance of LCEs is studied in detail. Furthermore, an inchworm-inspired crawling soft robot based on a liquid crystal elastomeric actuator is demonstrated, coupled with tilted fish-scale-like microstructures with anisotropic friction serving as the foot for moving forward. In addition, the anisotropic friction of scales inclined at different angles is measured to characterize the friction anisotropy. Lastly, the kinematic performance of the inchworm-inspired robot is tested on different surfaces.



Editor’s note: Last week, Amazon announced that it was acquiring iRobot for $1.7 billion, prompting questions about how iRobot’s camera-equipped robot vacuums will protect the data that they collect about your home. In September of 2017, we spoke with iRobot CEO Colin Angle about iRobot’s approach to data privacy, directly addressing many similar concerns. “The views expressed in the Q&A from 2017 remain true,” iRobot told us. “Over the past several years, iRobot has continued to do more to strengthen, and clearly define, its stance on privacy and security. It’s important to note that iRobot takes product security and customer privacy very seriously. We know our customers invite us into their most personal spaces—their homes—because they trust that our products will help them do more. We take that trust seriously.”

The article from 7 September 2017 follows:

About a month ago, iRobot CEO Colin Angle mentioned something about sharing Roomba mapping data in an interview with Reuters. It got turned into a data privacy kerfuffle in a way that iRobot did not intend and (probably) did not deserve, as evidenced by their immediate clarification that iRobot will not sell your data or share it without your consent.

Data privacy is important, of course, especially for devices that live in your home with you. But as robots get more capable, the amount of data that they collect will increase, and sharing that data in a useful, thoughtful, and considerate way could make smart homes way smarter. To understand how iRobot is going to make this happen, we spoke with Angle about keeping your data safe, integrating robots with the future smart home, and robots that can get you a beer.

Colin Angle on . . .

How Roomba Can Help Your House Understand Itself

Why Robots Capture People’s Imagination

iRobot Is Not Going to Sell Your Data

Are Robots With Cameras a Good Idea?

“The Home Itself Is Turning Into a Robot”

The Beer-Fetching Robots Are Coming

Were you expecting such a strong reaction on data privacy when you spoke with Reuters?

Colin Angle: We were all a little surprised, but it gave us an opportunity to talk a little more explicitly about our plans on that front. In order for your house to work smarter, the house needs to understand itself. If you want to be able to say, “Turn on the lights in the kitchen,” then the home needs to be able to understand what the kitchen is, and what lights are in the kitchen. And if you want that to work with a third-party device, you need a trusted, customer-in-control mechanism to allow that to happen. So, it’s not about selling data, it’s about usefully linking together different devices to make your home actually smart. The interesting part is that the limiting factor in making your home intelligent isn’t AI, it’s context. And that’s what I was talking about to Reuters.

What kinds of data can my Roomba 980 collect about my home?

Angle: The robot uses its sensors [including its camera] to understand where it is and create visual landmarks, things that are visually distinctive that it can recognize again. As the robot explores the home as a vacuum, it knows where it is relative to where it started, and it creates a 2D map of the home. None of the images ever leave the robot; the only map information that leaves the robot would be if the customer says, “I would like to see where the robot went,” and then the map is processed into a prettier form and sent up to the cloud and to your phone. If you don’t want to see it, it stays on the robot and never leaves the robot.

Do you think that there’s a perception that these maps contain much more private information about our homes than they really do?

Angle: I think that if you look at [the map], you know exactly what it is. In the future, we’d like it to have more detail, so that you could give more sophisticated commands to the robot, from “Could you vacuum my kitchen?” in which case the robot needs to know where the kitchen is, to [in the future], “Go to the kitchen and get me a beer.” In that case, the robot needs to know where the kitchen is, where the refrigerator is, what a beer is, and how to grab it. We’re at a very benign point right now, and we’re trying to establish a foundation of trust with our customers about how they have control over their data. Over time, when we want our homes to be smarter, you’ll be able to allow your robot to better understand your home, so it can do things that you would like it to do, in a trusted fashion.

“Robots are viewed as creatures in the home. That’s both exciting and a little scary at the same time, because people anthropomorphize and attribute much more intelligence to them than they do to a smart speaker.”

Fundamentally, would the type of information that this sort of robot would be sharing with third parties be any more invasive than an Amazon Echo or Google Home?

Angle: Robots have this inherent explosive bit of interest, because they’re viewed as creatures in your home. That’s both exciting and a little scary at the same time, because people anthropomorphize and attribute much more intelligence to them than they do to a smart speaker. The amount of information that one of these robots collects is, in many ways, much less, but because it moves, it really captures people’s imagination.

Why do you think people seem to be more concerned about the idea of robots sharing their data?

Angle: I think it’s the idea that you’d have a “robot spy” in your home. Your home is your sanctuary, and people rightfully want their privacy. If we have something gathering data in their home, we’re beyond the point where a company can exploit their customers by stealthily gathering data and selling it to other people. The things you buy and place in your home are there to benefit you, not some third party. That was the fear that was unleashed by this idea of gathering and selling data unbeknownst to the customer. At iRobot, we’ve said, “Look, we’re not going to do this, we’re not going to sell your data.” We don’t even remember your map unless you tell us we can. Our very explicit strategy is building this trusted relationship with our customers, so that they feel good about the benefits that Roomba has.

How could robots like Roomba eventually come to understand more about our homes to enable more sophisticated functionality?

Angle: We’re in the land of R&D here, not Roomba products, but certainly there exists object-recognition technology that can determine what a refrigerator is, what a television is, what a table is. It would be pretty straightforward to say, if the room contains a refrigerator and an oven, it’s probably the kitchen. If a room has a bed, it’s probably a bedroom. You’ll never be 100 percent right, but rooms have purposes, and we’re certainly on a path where just by observing, a robot could identify a room.
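Angle's heuristic is simple enough to sketch in a few lines. The toy example below guesses a room label from objects an object-recognition system reports having seen; the object labels and rules are purely illustrative assumptions, not iRobot's implementation.

```python
# Toy illustration of the "refrigerator + oven => kitchen" heuristic described
# above; object labels and rules are assumptions, not iRobot code.
ROOM_RULES = {
    "kitchen": {"refrigerator", "oven"},
    "bedroom": {"bed"},
    "living room": {"sofa", "television"},
}

def guess_room(detected_objects):
    """Return the room whose rule set overlaps most with the detections."""
    detected = set(detected_objects)
    scores = {room: len(required & detected) / len(required)
              for room, required in ROOM_RULES.items()}
    best_room, best_score = max(scores.items(), key=lambda kv: kv[1])
    # As Angle notes, you'll never be 100 percent right; report a confidence.
    return (best_room if best_score > 0 else "unknown"), best_score

print(guess_room(["oven", "refrigerator", "table"]))  # ('kitchen', 1.0)
print(guess_room(["bed", "lamp"]))                    # ('bedroom', 1.0)
```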

What else do you think a robot like a Roomba could ultimately understand about your home?

Angle: We’re working on a number of things, some of which we’re happy to talk about and some of which less so at this point in time. But, why should your thermostat be on the wall? Why is one convenient place on the wall of one room the right place to measure temperature from, as opposed to where you like to sit? When you get into home control, your sensor location can be critically tied to the operation of your home. The opportunity to have the robot carry sensors with it around the home would allow the expansion from a point reading to a 2D or 3D map of those readings. As a result, the customer has a lot more control over [for example] how loud the stereo system is at a specific point, or what the temperature is at a specific point. You could also detect anomalies in the home if things are not working the way the customer would like them to work. Those are some simple examples of why moving a sensor around would matter.

“There’s a pretty sizeable market for webcams in the home. People are interested in security and intruder detection, and also in how their pets are doing. But invariably, what you want to see is not what your camera is pointing at. That’s something where a robot makes a lot more sense.”

Another good example would be, there’s actually a pretty sizeable market for webcams in the home. People are interested in security and intruder detection, and also in how their pets are doing. But invariably, what you want to see is not what your camera is currently pointing at. Some people fill up their homes with cameras, or you put a camera on a robot, and it moves to where you want to look. That’s something where a robot makes a lot more sense, and it’s interesting, if I want to have privacy in our home and yet still have a camera I can use, it’s actually a great idea to put one on a robot, because when the robot isn’t in the room with you, it can’t see you. So, the metaphor is a lot closer to if you had a butler in your home— when they’re not around, you can have your privacy back. This is a metaphor that I think works really well as we try to architect smart homes that are both aware of themselves, and yet afford privacy.

So a mobile robot equipped with sensors and mapping technology to be able to understand your home in this way could act like a smart home manager?

Angle: A spatial information organizer. There’s a hub with a chunk of software that controls everything, and that’s not necessarily the same thing as what the robot would do. What Apple and Amazon and various smart home companies are doing is trying to build hubs where everything connects to them, but in order for these hubs to be actually smart, they need what I call spatial content: They need to understand what’s a room, and what’s in a room for the entire home. Ultimately, the home itself is turning into a robot, and if the robot’s not aware of itself, it can’t do the right things.

“A robot needs to understand what’s a room, and what’s in a room, for the entire home. Ultimately, the home itself is turning into a robot, and if the robot’s not aware of itself, it can’t do the right things.”

So, if you wanted to walk into a room and have the lights turn on and the heat come up, and if you started watching television and then left the room and wanted the television to turn off in the room you’d left and turn on in the room you’d gone to, all of those types of experiences where the home is seamlessly reacting to you require an understanding of rooms and what’s in each room. You can brute force that with lots of cameras and custom programming for your home, but I don’t believe that installations like this can be successful or scale. The solution where you own a Roomba anyway, and it just gives you all this information enabling your home to be smart, that’s an exciting vision of how we’re actually going to get smart homes.

What’s your current feeling about mobile manipulators in the home?

Angle: We’re getting there. In order for manipulation in the home to make sense, you need to know where you are, right? What’s the point of being able to get something if you don’t know where it is? So this idea that we need these maps that have information embedded in them about where stuff is and the ability to actually segment objects—there’s a hierarchy of understanding. You need to know where a room is as a first step. You need to identify where objects are—that’s recognition of larger objects. Then you need to be able to open a door, say, and now you’re processing larger objects to find handles that you can reach out and grab. The ability to do all of these things exists in research labs, and to an increasing degree in manufacturing facilities. We’re past the land of invention of the proof of principle, and into the land of, could we reduce this to a consumer price point that would make sense to people in the home. We’re well on the way—I would say that in five to 10 years, we’ll have robots that can go and get you a beer. I don’t think it’s going to be a lot shorter than that, because we have a few steps to go, but it’s less invention and more engineering.

We should note that we spoke with Angle just before Neato announced their new D7 robot vacuum, which adds persistent, actionable maps, arguably the first step towards a better understanding of the home. Since Roombas already have the ability to make a similar sort of map, based on the sorts of things that Angle spoke about in our interview, we’re expecting to see iRobot add a substantial amount of intelligence and functionality to the Roomba in the very near future.



This morning, Amazon and iRobot announced “a definitive merger agreement under which Amazon will acquire iRobot” for US $1.7 billion. The announcement was a surprise, to put it mildly, and we’ve barely had a chance to digest the news. But taking a look at what’s already known can still yield initial (if incomplete) answers as to why Amazon and iRobot want to team up—and whether the merger seems like a good idea.

The press release, like most press releases about acquisitions of this nature, doesn’t include much in the way of detail. But here are some quotes:

“We know that saving time matters, and chores take precious time that can be better spent doing something that customers love,” said Dave Limp, SVP of Amazon Devices. “Over many years, the iRobot team has proven its ability to reinvent how people clean with products that are incredibly practical and inventive—from cleaning when and where customers want while avoiding common obstacles in the home, to automatically emptying the collection bin. Customers love iRobot products—and I'm excited to work with the iRobot team to invent in ways that make customers' lives easier and more enjoyable.”

“Since we started iRobot, our team has been on a mission to create innovative, practical products that make customers' lives easier, leading to inventions like the Roomba and iRobot OS,” said Colin Angle, chairman and CEO of iRobot. “Amazon shares our passion for building thoughtful innovations that empower people to do more at home, and I cannot think of a better place for our team to continue our mission. I’m hugely excited to be a part of Amazon and to see what we can build together for customers in the years ahead.”

There’s not much to go on here, and iRobot has already referred us to Amazon PR, which, to be honest, feels like a bit of a punch in the gut. I love (loved?) so many things about iRobot—their quirky early history working on weird DARPA projects and even weirder toys, everything they accomplished with the PackBot (and also this), and most of all, the fact that they’ve made a successful company building useful and affordable robots for the home, which is just…it’s so hard to do that I don’t even know where to start. And nobody knows what’s going to happen to iRobot going forward. I’m sure iRobot and Amazon have all kinds of plans and promises and whatnot, but still—I’m now nervous about iRobot’s future.

Why this is a good move for Amazon is clear, but what exactly is in it for iRobot?

It seems fairly obvious why Amazon wanted to get its hands on iRobot. Amazon has been working for years to integrate itself into homes, first with audio systems (Alexa), and then video (Ring), and more recently some questionable home robots of its own, like its indoor security drone and Astro. Amazon clearly needs some help in understanding how to make home robots useful, and iRobot can likely provide some guidance, with its extraordinarily qualified team of highly experienced engineers. And needless to say, iRobot is already well established in a huge number of homes, with brand recognition comparable to something like Velcro or Xerox, in the sense that people don’t have “robot vacuums,” they have Roombas.

All those Roombas in all of those homes are also collecting a crazy amount of data for iRobot. iRobot itself has been reasonably privacy-sensitive about this, but it would be naïve not to assume that Amazon sees a lot of potential for learning much, much more about what goes on in our living rooms. This is more concerning, because Amazon has its own ideas about data privacy, and it’s unclear what this will mean for increasingly camera-reliant Roombas going forward.

I get why this is a good move for Amazon, but I must admit that I’m still trying to figure out what exactly is in it for iRobot, besides of course that “$61 per share in an all-cash transaction valued at approximately $1.7 billion.” Which, to be fair, seems like a heck of a lot of money. Usually when these kinds of mergers happen (and I’m thinking back to Google acquiring all those robotics companies in 2013), the hypothetical appeal for the robotics company is that suddenly they have a bunch more resources to spend on exciting new projects along with a big support structure to help them succeed.

It’s true that iRobot has apparently had some trouble with finding ways to innovate and grow, with their biggest potential new consumer product (the Terra lawn mower) having been on pause since 2020. It could be that big pile of cash, plus not having to worry so much about growth as a publicly traded company, plus some new Amazon-ish projects to work on could be reason enough for this acquisition.

My worry, though, is that iRobot is just going to get completely swallowed into Amazon and effectively cease to exist in a meaningful and unique way. I hope that the relationship between Amazon and iRobot will be an exception to this historical trend. Plus, there is some precedent for this—Boston Dynamics, for example, has survived multiple acquisitions while keeping its technology and philosophy more or less independent and intact. It’ll be on iRobot to very aggressively act to preserve itself, and keeping Colin Angle as CEO is a good start.

We’ll be trying to track down more folks to talk to about this over the coming weeks for a more nuanced and in-depth perspective. In the meantime, make sure to give your Roomba a hug—it’s been quite a day for little round robot vacuums.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

This probably counts as hard mode for Ikea chair assembly.

[ Naver Lab ]

As anyone working with robotics knows, it’s mandatory to spend at least 10 percent of your time just mucking about with them because it’s fun, as GITAI illustrates with its new 10-meter robotic arm.

[ GITAI ]

Well, this is probably the weirdest example of domain randomization in simulation for quadrupeds that I’ve ever seen.

[ RSL ]

RoboCup 2022 was held in Bangkok, Thailand. The final match was between B-Human from Bremen (jerseys in black) and HTWK Robots from Leipzig (jerseys in blue). The video starts with one of our defending robots starting a duel with the opponent. After a short time, a pass is made to another robot, which tries to score a goal, but the opponent goalie is able to catch the ball. Afterwards, another attacking robot is already waiting at the center circle to take its chance to score a goal through all four opponent robots.

[ Team B-Human ]

The mission to return Martian samples back to Earth will see a European 2.5-meter-long robotic arm pick up tubes filled with precious soil from Mars and transfer them to a rocket for a historic interplanetary delivery.

[ ESA ]

I still cannot believe that this is an approach to robotic fruit-picking that actually works.

[ Tevel Aerobotics ]

This video shows the basic performance of the humanoid robot Torobo, which is used as a research platform for JST’s Moonshot R&D program.

[ Tokyo Robotics ]

Volocopter illustrates why I always carry two violins with me everywhere. You know, just in case.

[ Volocopter ]

We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning. Developing algorithms to enable a legged robot to shoot a soccer ball to a given target is a challenging problem that combines robot motion control and planning into one task.

[ Hybrid Robotics ]

I will always love watching Cassie try very, very hard to not fall over, and then fall over. <3

[ Michigan Robotics ]

I don’t think this paper is about teaching bipeds to walk with attitude, but it should be.

[ DLG ]

Modboats are capable of collective swimming in arbitrary configurations! In this video you can see three different configurations of the Modboats swim across our test space and demonstrate their capabilities.

[ ModLab ]

How have we built our autonomous driving technology to navigate the world safely? It comes down to three easy steps: Sense, Solve, and Go. Using a combination of lidar, camera, radar, and compute, the Waymo Driver can visualize the world, calculate what others may do, and proceed smoothly and safely, day and night.

[ Waymo ]

Alan Alda discusses evolutionary robotics with Hod Lipson and Jordan Pollack on Scientific American Frontiers in 1999.

[ Creative Machines Lab ]

Brady Watkins gives us insight into how a big company like Softbank Robotics looks into the robotics market.

[ Robohub ]



Traditional aerial manipulation systems are usually composed of rigid-link manipulators attached to an aerial platform, giving rise to several rigidity-related issues such as difficulty of reach, lack of compliant motion, limited adaptability to uncertainties in an object’s shape and pose, and safety concerns in human-manipulator interaction, especially in unstructured and confined environments. To address these issues, partially compliant manipulators, composed of rigid links and compliant/flexible joints, have been proposed; however, they still suffer from insufficient dexterity and maneuverability. In this article, a new class of compliant aerial manipulators is suggested. For this purpose, the concept of the aerial continuum manipulation system (ACMS) is introduced, several conceptual configurations are proposed, and the functionalities of ACMSs for different applications are discussed. The performance of the proposed aerial manipulators is then compared with that of conventional aerial manipulators using benchmarks available in the literature. To enhance the comparison, new features with related benchmarks are presented and used for evaluation. The advantages of ACMSs over their rigid-link counterparts are illustrated and potential applications of ACMSs are suggested. Open problems, such as those related to the dynamic coupling and control of ACMSs, are also highlighted.

Social robots are increasingly developed to provide companionship for children. In this article, we explore the moral implications of child-robot friendships using the Aristotelian framework of virtue ethics. We adopt a moderate position and argue that, although robots cannot be virtue friends, they can nonetheless enable children to exercise ethical and intellectual virtues. The Aristotelian requirements for true friendship apply only partly to children: unlike adults, children relate to friendship as an educational play of exploration, which is constitutive of the way they acquire and develop virtues. We highlight that there is a relevant difference between the way we evaluate adult-robot friendship and child-robot friendship, rooted in the difference in moral agency and moral responsibility that generates the asymmetries in the moral status ascribed to adults versus children. We look into the role played by imaginary companions (ICs) and personified objects (POs) in children’s moral development and claim that robots, understood as Personified Robotic Objects (PROs), play a role similar to that of such fictional entities, enabling children to exercise affection, moral imagination, and reasoning, thus contributing to their development as virtuous adults. Nonetheless, we argue that adequate use of robots for children’s moral development is conditioned by several requirements related to design, technology, and moral responsibility.

Retinal vein injection guided by microscopic images is an innovative procedure for treating retinal vein occlusion. However, retinal tissue is complex, fine, and fragile, and the scale and forces of the operation are small. Surgeons’ limited manipulation and force-sensing accuracy make it difficult to perform precise and stable drug injection on the retina in a magnified field of view. In this paper, a 3-DOF automatic drug injection mechanism was designed for microscopic-image-guided, robot-assisted needle delivery and automatic drug injection. Additionally, a robot-assisted, real-time, three-dimensional micro-force-sensing method for retinal vein injection is proposed. Based on three FBG sensors laid out in a circular array on the hollow outer wall of a nested nickel-titanium-alloy needle tube, real-time sensing of the contact force between the intraoperative instrument and the blood vessel was realized. Experimental data from 15 groups of porcine retinal veins with diameters of 100–200 μm showed that the piercing force between surgical instrument and blood vessel is 5.95–12.97 mN, with an average value of 9.98 mN. Furthermore, 20 groups of experimental measurements on chicken embryo blood vessels with diameters of 150–500 μm showed a piercing force of 4.02–23.4 mN, with an average value of 12.05 mN.
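The three-sensor arrangement implies a conventional calibration step: wavelength shifts measured by the three FBGs are mapped to a three-axis force through a calibration matrix identified beforehand. The sketch below shows that mapping on made-up numbers; the matrix and wavelength shifts are hypothetical and are not the paper’s calibration data.

```python
import numpy as np

# Hypothetical calibration matrix mapping three FBG wavelength shifts (nm)
# to a force vector [Fx, Fy, Fz] in millinewtons. In practice this matrix
# would be identified by pressing the needle against a reference force sensor.
C = np.array([
    [40.0, -20.0, -20.0],
    [ 0.0,  34.6, -34.6],
    [15.0,  15.0,  15.0],
])

def force_from_fbg(delta_lambda_nm):
    """Estimate the 3-D tip contact force from the three wavelength shifts."""
    return C @ np.asarray(delta_lambda_nm, dtype=float)

f = force_from_fbg([0.12, 0.10, 0.08])  # illustrative shifts, not measured data
print("force vector (mN):", np.round(f, 2))
print("magnitude (mN):", round(float(np.linalg.norm(f)), 2))
```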



I don’t know about you, but being stuck at home during the pandemic made me realize two things. Thing the first: My entire life is disorganized. And thing the second: Organizing my life, and then keeping organized, is a pain in the butt. This is especially true for those of us stuck in apartments that are a bit smaller than we’d like them to be. With space at a premium, Mengni Zhang, a Ph.D. student at Cornell’s Architectural Robotics Lab, looked beyond floor space. Zhang wants to take advantage of wall space—even if it’s not easily reachable—using a small swarm of robot shelves that offer semiautonomous storage on demand.

During the pandemic I saw an increased number of articles advising people to clean up and declutter at home. We know the health benefits of maintaining an organized lifestyle, yet I could not find many empirical studies on understanding organizational behaviors, or examples of domestic robotic organizers for older adults or users with mobility impairments. There are already many assistive technologies, but most are floor based, which may not work so well for people living in small urban apartments. So, I tried to focus more on indoor wall-climbing robots, sort of like Roomba but on the wall.

The main goal was to quickly build a series of functional prototypes (here I call them SORT, which stands for “Self-Organizing Robot Team”) to conduct user studies to understand different people's preferences and perceptions toward this organizer concept. By helping people declutter and rearrange personal items on walls and delivering them to users as needed or wanted while providing some ambient interactions, I’m hoping to use these robots to improve quality of life and enhance our home environments.

This idea of intelligent architecture is a compelling one, I think—it’s sort of like the Internet of Things, except with an actuated physical embodiment that makes it more useful. Personally, I like to imagine hanging a coat on one of these little dudes and having it whisked up out of the way, or maybe they could even handle my bike, if enough of them work together. As Zhang points out, this concept could be especially useful for folks with disabilities who need additional workspace flexibility.

Besides just object handling, it’s easy to imagine these little robots operating as displays, as artwork, as sun-chasing planters, lights, speakers, or anything else. It’s just a basic proof of concept at the moment, and one that does require a fair amount of infrastructure to function in its current incarnation (namely, ferrous walls), but I certainly appreciate the researcher’s optimism in suggesting that “wall-climbing robots like the ones we present might become a next ‘killer app’ in robotics, providing assistance and improving life quality.”



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

The ocean contains a seemingly endless expanse of territory yet to be explored—and mapping out these uncharted waters globally poses a daunting task. Fleets of autonomous underwater robots could be invaluable tools to help with mapping, but these need to be able to navigate cluttered areas while remaining efficient and accurate.

In a study published 24 June in the IEEE Journal of Oceanic Engineering, one research team has developed a novel framework that allows autonomous underwater robots to map cluttered areas with high efficiency and low error rates.

A major challenge in mapping underwater environments is the uncertainty of the robot’s position.

“Because GPS is not available underwater, most underwater robots do not have an absolute position reference, and the accuracy of their navigation solution varies,” explains Brendan Englot, an associate professor of mechanical engineering at the Stevens Institute of Technology, in Hoboken, N.J., who was involved in the study. “Predicting how it will vary as a robot explores uncharted territory will permit an autonomous underwater vehicle to build the most accurate map possible under these challenging circumstances.”

The model created by Englot’s team uses a virtual map that abstractly represents the surrounding area that the robot hasn’t seen yet. They developed an algorithm that plans a route over this virtual map in a way that takes the robot's localization uncertainty and perceptual observations into account.
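
The actual planner in the paper is more involved, but as a rough illustration of the idea, here is a minimal Python sketch that scores candidate views on a grid-style virtual map by trading expected new coverage against the localization uncertainty predicted to build up along the path to each view. All of the function names, weights, and the linear drift model are assumptions made for illustration, not details from the study.

```python
import numpy as np

# Illustrative sketch only: score candidate views on a coarse "virtual map" by
# trading expected information gain (unknown cells a view would reveal) against
# the localization uncertainty predicted to accumulate along the path there.
# Weights, drift model, and map encoding are assumptions, not from the paper.

def expected_info_gain(virtual_map, view_cells):
    """Count cells the candidate view would observe that are still unknown (-1)."""
    return sum(1 for cell in view_cells if virtual_map[cell] == -1)

def predicted_uncertainty(path_length, drift_rate=0.02, current_sigma=0.5):
    """Simplified model: dead-reckoning error grows linearly with distance traveled."""
    return current_sigma + drift_rate * path_length

def score_view(virtual_map, view_cells, path_length, alpha=1.0, beta=5.0):
    """Higher is better: reward new coverage, penalize predicted position error."""
    return alpha * expected_info_gain(virtual_map, view_cells) \
        - beta * predicted_uncertainty(path_length)

# Tiny example: a 20 x 20 virtual map, mostly unknown, and two candidate views.
vmap = -np.ones((20, 20), dtype=int)              # -1 unknown, 0 free, 1 occupied
vmap[:5, :5] = 0                                  # a small corner already explored
candidates = {
    "near": ([(6, j) for j in range(5)], 4.0),    # close by, modest new coverage
    "far": ([(15, j) for j in range(10)], 25.0),  # distant, larger new coverage
}
best = max(candidates, key=lambda name: score_view(vmap, *candidates[name]))
print("selected view:", best)
```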

The perceptual observations are collected using sonar imaging, which helps detect objects in the environment in front of the robot within a 30-meter range and a 120-degree field of view. “We process the imagery to obtain a point cloud from every sonar image. These point clouds indicate where underwater structures are located relative to the robot,” explains Englot.
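
As a rough illustration of that processing step (not the team's actual pipeline), here is a minimal Python sketch that converts a polar sonar intensity image into a 2D point cloud in the robot frame. The 30-meter range and 120-degree field of view come from the article; the image dimensions and intensity threshold are assumptions.

```python
import numpy as np

# Illustrative sketch: keep the strongest return in each sonar beam and convert
# its range and bearing into Cartesian coordinates in the robot frame.
# Image shape and threshold are assumed; range and FOV are from the article.

def sonar_to_pointcloud(intensity, max_range=30.0, fov_deg=120.0, threshold=0.8):
    n_bins, n_beams = intensity.shape
    ranges = np.linspace(0.0, max_range, n_bins)
    bearings = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_beams))
    points = []
    for j, theta in enumerate(bearings):
        i = int(np.argmax(intensity[:, j]))       # strongest return in this beam
        if intensity[i, j] >= threshold:
            r = ranges[i]
            points.append((r * np.cos(theta), r * np.sin(theta)))
    return np.array(points)

# Synthetic example: one bright return per beam, roughly 17.6 meters ahead.
img = np.random.rand(512, 256) * 0.5
img[300, :] = 1.0
cloud = sonar_to_pointcloud(img)
print(cloud.shape)                                # (256, 2): one point per beam
```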

The research team then tested their approach using a BlueROV2 underwater robot in a harbor at Kings Point, N.Y., an area that Englot says was large enough to permit significant navigation errors to build up, but small enough to perform numerous experimental trials without too much difficulty. The team compared their model to several other existing ones, testing each model in at least three 30-minute trials in which the robot navigated the harbor. The different models were also evaluated through simulations.

“The results revealed that each of the competing [models] had its own unique advantages, but ours offered a very appealing compromise between exploring unknown environments quickly while building accurate maps of those environments,” says Englot.

He notes that his team has applied for a patent covering the use of their model for subsea oil and gas production. However, they envision that the model will also be useful for a broader set of applications, such as inspecting offshore wind turbines, offshore aquaculture infrastructure (including fish farms), and civil infrastructure, such as piers and bridges.

“Next, we would like to extend the technique to 3D mapping scenarios, as well as situations where a partial map may already exist, and we want a robot to make effective use of that map, rather than explore an environment completely from scratch,” says Englot. “If we can successfully extend our framework to work in 3D mapping scenarios, we may also be able to use it to explore networks of underwater caves or shipwrecks.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

A University of Washington team created a new tool that can design a 3D-printable passive gripper and calculate the best path to pick up an object. The team tested this system on a suite of 22 objects—including a 3D-printed bunny, a doorstop-shaped wedge, a tennis ball and a drill.

[ UW ]

Combining off-the-shelf components with 3D-printing, the Wheelbot is a symmetric reaction wheel unicycle that can jump onto its wheels from any initial position. With non-holonomic and under-actuated dynamics, as well as two coupled unstable degrees of freedom, the Wheelbot provides a challenging platform for nonlinear and data-driven control research.

[ Wheelbot ]

I think we posted a similar video to this before, but it’s so soothing and beautifully shot and this time it’s fully autonomous. Watch until the end for a very impressive catch!

[ Griffin ]

Quad-SDK is an open source, ROS-based full stack software framework for agile quadrupedal locomotion. The design of Quad-SDK is focused on the vertical integration of planning, control, estimation, communication, and development tools which enable agile quadrupedal locomotion in simulation and hardware with minimal user changes for multiple platforms.

[ Quad-SDK ]

Zenta makes some of the best legged robots out there, including MorpHex, which appears to be still going strong.

And now, a relaxing ride with MXPhoenix:

[ Zenta ]

We have developed a set of teleoperation strategies using human hand gestures and arm movements to fully teleoperate a legged manipulator through whole-body control. To validate the system, a pedal bin item disposal demo was conducted to show that the robot could exploit its kinematics redundancy to follow human commands while respecting certain motion constraints, such as keeping balance.

[ University of Leeds ]

Thanks Chengxu!

An introduction to HEBI Robotics’ line of modular mobile bases for confined spaces, dirty environments, and magnetic crawling.

[ HEBI Robotics ]

Thanks Kamal!

Loopy is a robotic swarm of 1-degree-of-freedom (DOF) agents (i.e., a closed loop made of 36 Dynamixel servos). In this iteration of Loopy, agents use average consensus to determine the orientation of a received shape that requires the least amount of movement. In this video, Loopy is spelling out the alphabet.

[ WVUIRL ]
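
For a sense of what that consensus step looks like, here is a minimal Python sketch of average consensus on a ring of 36 agents: each agent repeatedly averages its value with its two neighbors, and everyone converges to the global mean. The values standing in for each agent's preferred shape rotation are made up, and a real implementation would need to handle angle wrap-around (for example, by averaging unit vectors) rather than treating rotations as plain scalars.

```python
import numpy as np

# Illustrative average-consensus sketch on a ring of 36 agents (Loopy-style).
# Each agent repeatedly replaces its value with the mean of itself and its two
# ring neighbors; all values converge to the global average. Initial values are
# made up, and true angles would need wrap-around handling (e.g., unit vectors).

N = 36
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 360.0, size=N)   # each agent's locally preferred rotation
target = x.mean()

for _ in range(500):
    left = np.roll(x, 1)
    right = np.roll(x, -1)
    x = (left + x + right) / 3.0      # local averaging with ring neighbors only

print(f"consensus value: {x[0]:.2f} deg, true mean: {target:.2f} deg")
```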

The latest robotics from DLR, as shared by Bram Vanderborght.

[ DLR ]

Picking a specific object from clutter is an essential component of many manipulation tasks. Partial observations often require the robot to collect additional views of the scene before attempting a grasp. This paper proposes a closed-loop next-best-view planner that drives exploration based on occluded object parts.

[ Active Grasp ]

This novel flying system combines an autonomous unmanned aerial vehicle with ground-penetrating radar to detect buried objects such as landmines. The system stands out with sensor-specific low-altitude flight maneuvers and high-accuracy position estimation. Experiments show radar detections of targets buried in sand.

[ FindMine ]

In this experiment, we demonstrate a combined exploration and inspection mission using the RMF-Owl collision-tolerant aerial robot inside the Nutec RelyOn facilities in Trondheim, Norway. The robot is tasked to autonomously explore and inspect the surfaces of the environment—within a height boundary—with its onboard camera sensor given no prior knowledge of the map.

[ NTNU ]

Delivering donuts to our incredible Turtlebot 4 development team! Includes a full walkthrough of the mapping and navigation capabilities of the Turtlebot 4 mobile robot with Maddy Thomson, Robotics Demo Designer from Clearpath Robotics. Create a map of your environment using SLAM Toolbox and learn how to get your Turtlebot 4 to navigate that map autonomously to a destination with the ROS 2 navigation stack.

[ Clearpath ]
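
If you want to try the navigation part yourself, here is a rough Python sketch of sending a goal through the ROS 2 navigation stack using the nav2_simple_commander API, assuming a map has already been built with SLAM Toolbox and Nav2 is up and running. The frame name and goal coordinates are placeholders, and the exact workflow shown in the video may differ.

```python
# Hedged sketch: send a single navigation goal via Nav2's Python commander.
# Assumes Nav2 and a localized map are already running; coordinates are placeholders.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()              # wait for localization and planners

goal = PoseStamped()
goal.header.frame_id = 'map'
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 2.0                   # placeholder goal in map coordinates
goal.pose.position.y = 1.0
goal.pose.orientation.w = 1.0

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    feedback = navigator.getFeedback()       # e.g., distance remaining

if navigator.getResult() == TaskResult.SUCCEEDED:
    print('Goal reached')
rclpy.shutdown()
```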



NASA has announced a conceptual mission architecture for the Mars Sample Return (MSR) program, and it’s a pleasant surprise. The goal of the proposed program is to return the rock samples that the Perseverance rover is currently collecting on the Martian surface to Earth, which, as you can imagine, is not a simple process. It’ll involve sending a sample-return lander (SRL) to Mars, getting those samples back to the lander, launching a rocket back to Mars orbit from the lander, and finally capturing that rocket with an orbiter that’ll cart the samples back to Earth.

As you might expect, the initial idea was to send a dedicated rover to go grab the sample tubes from wherever Perseverance had cached them and bring them back to the lander carrying the rocket, because how else are you going to go get them, right? But NASA has decided that Plan A is for Perseverance to drive the samples to the SRL itself. Plan B, if Perseverance can’t make it, is to collect the samples with two helicopters instead.

NASA’s approach here is driven by two things: First, Curiosity has been on Mars for 10 years, and is still doing great. Perseverance is essentially an improved version of Curiosity, giving NASA confidence that the newer rover will still be happily roving by the time the SRL lands. And second, the Ingenuity helicopter is also still doing awesome, which is (let’s be honest) kind of a surprise, considering that it’s a tech demo that was never designed for the kind of performance that we’ve seen. NASA now seems to believe that helicopters are a viable tool for operating on the Martian surface, and therefore should be considered as an option for Mars operations.

In the new sample-return mission concept, Perseverance will continue collecting samples as it explores the river delta in Jezero crater. It’ll collect duplicates of each sample, and once it has 10 samples (20 tubes’ worth), it’ll cache the duplicates somewhere on the surface as a sort of backup plan. From there, Percy will keep exploring and collecting samples (but not duplicates) as it climbs out of the Jezero crater, where it’ll meet the sample-return lander in mid-2030. NASA says that the SRL will be designed with pinpoint landing capability, able to touch down within 50 meters of where NASA wants it to, meaning that a rendezvous with Perseverance should be a piece of cake—or as much of a piece of cake as landing on Mars can ever be. After Perseverance drives up to the SRL, a big arm on the SRL will pluck the sample tubes out of Perseverance and load them into a rocket, and then off they go to orbit and eventually back to Earth, probably by 2033.

The scenario described above is how everything is supposed to work, but it depends entirely on Perseverance doing what it’s supposed to do. If the rover is immobilized, the SRL will still be able to land nearby, but those sample tubes will have to get back to the SRL somehow, and NASA has decided that the backup plan will be helicopters.

The two “Ingenuity class” helicopters that the SRL will deliver to Mars will be basically the same size as Ingenuity, although a little bit heavier. There are two big differences: first, each helicopter gets a little arm for grabbing sample tubes (which weigh between 100 and 150 grams each) off of the Martian surface. And second, the helicopters get small wheels at the end of each of their legs. It sounds like these wheels will be powered, and while they’re not going to offer a lot of mobility, presumably it’ll be enough so that if the helicopter lands close to a sample, it can drive itself a short distance to get within grabbing distance. Here’s how Richard Cook, the Mars sample-return program manager at JPL, says the helicopters would work:

“In the scenario where the helicopters are used [for sample retrieval], each of the helicopters would be able to operate independently. They’d fly out to the [sample] depot location from where SRL landed, land in proximity to the sample tubes, roll over to them, pick one up, then fly back in proximity to the lander, roll up to the lander, and drop [the tube] onto the ground in a spot where the [European Space Agency] sample transfer arm could pick it up and put it into the [Mars Ascent Vehicle]. Each helicopter would be doing that separately, and over the course of four or five days per tube, would bring all the sample tubes back to the lander that way.”

This assumes that Perseverance didn’t explode or roll down a hill or something, and that it would be able to drop its sample tubes on the ground for the helicopters to pick up. Worst case, if Percy completely bites it, the SRL could land near the backup sample cache in the Jezero crater and the helicopters could grab those instead.

Weirdly, the primary mission of the helicopters is as a backup to Perseverance, meaning that if the rover is healthy and able to deliver the samples to the SRL itself, the helicopters won’t have much to do. NASA says they could “observe the area around the lander,” which seems underwhelming, or take pictures of the Mars Ascent Vehicle launch, which seems awesome but not really worth sending two helicopters to Mars for. I’m assuming that this’ll get explored a little more, because it seems like a potential wasted opportunity otherwise.

We’re hoping that this announcement won’t have any impact on JPL’s concept for a much larger, much more capable Mars Science Helicopter, but this sample-return mission (rather than a new science mission) is clearly the priority right now. The most optimistic way of looking at it is that this sample-return-mission architecture is a strong vote of confidence by NASA in helicopters on Mars in general, making a flagship helicopter mission that much more likely. But we’re keeping our fingers crossed.
