Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios. In order to interact with and manipulate their environments, most legged robots are equipped with a dedicated robot arm, which means additional mass and mechanical complexity compared to standard legged robots. In this work, we explore pedipulation—using the legs of a legged robot for manipulation.

This work, by Philip Arm, Mayank Mittal, Hendrik Kolvenbach, and Marco Hutter from ETHZ RSL, will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2024) in May in Japan (see events calendar above).

[ Pedipulate ]

I learned a new word today! “Stigmergy.” Stigmergy is a kind of group coordination that’s based on environmental modification. Like, when insects leave pheromone trails, they’re not directly sending messages to other individuals, but as a group the ants are able to manifest surprisingly complex coordinated behaviors. Cool, right? Researchers at IRIDIA are exploring the possibilities for robots using stigmergy with a cool ‘artificial pheromone’ system based on a UV-sensitive surface.

“Automatic design of stigmergy-based behaviors for robot swarms,” by Muhammad Salman, David Garzón Ramos, and Mauro Birattari, is published in the journal Communications Engineering.

[ Nature ] via [ IRIDIA ]

Thanks, David!

Filmed in July 2017, this video shows Atlas walking through a “hatch” on a pitching surface. This uses autonomous behaviors, with the robot not knowing about the rocking world. Robot built by Boston Dynamics for the DARPA Robotics Challenge in 2013. Software by IHMC Robotics.

[ IHMC ]

That IHMC video reminded me of the SAFFiR program for Shipboard Autonomous Firefighting Robots, which is responsible for a bunch of really cool research in partnership with the United States Naval Research Laboratory. NRL did some interesting stuff with Nexi robots from MIT and made their own videos. I think that effort didn’t get nearly enough credit for being very entertaining while communicating important robotics research.

[ NRL ]

I want more robot videos with this energy.

[ MIT CSAIL ]

Large industrial asset operators increasingly use robotics to automate hazardous work at their facilities. This has led to soaring demand for autonomous inspection solutions like ANYmal. Series production by our partner Zollner enables ANYbotics to supply our customers with the required quantities of robots.

[ ANYbotics ]

This week is Grain Bin Safety Week, and Grain Weevil is here to help.

[ Grain Weevil ]

Oof, this is some heavy, heavy deep-time stuff.

[ Onkalo ]

And now, this.

[ RozenZebet ]

Hawkeye is a real-time multimodal conversation and interaction agent for Boston Dynamics’ mobile robot Spot. Leveraging OpenAI’s experimental GPT-4 Turbo and Vision AI models, Hawkeye aims to empower everyone, from seniors to healthcare professionals, to form new and unique interactions with the world around them.

That moment at 1:07 is so relatable.

[ Hawkeye ]

Wing would really prefer that if you find one of their drones on the ground, you don’t run off with it.

[ Wing ]

The rover Artemis, developed at the DFKI Robotics Innovation Center, has been equipped with a penetrometer that measures the soil’s penetration resistance to obtain precise information about soil strength. The video showcases an initial test run with the device mounted on the robot. During this test, the robot was remotely controlled, and the maximum penetration depth was limited to 15 mm.

[ DFKI ]

To efficiently achieve complex humanoid loco-manipulation tasks in industrial contexts, we propose a combined vision-based tracker-localization interplay integrated as part of a task-space whole-body optimization control. Our approach allows humanoid robots, targeted for industrial manufacturing, to manipulate and assemble large-scale objects while walking.

[ Paper ]

We developed a novel multi-body robot (called the Two-Body Bot) consisting of two small-footprint mobile bases connected by a four-bar linkage on which handlebars are mounted. Each base measures only 29.2 cm wide, making the robot likely the slimmest ever developed for mobile postural assistance.

[ MIT ]

Lex Fridman interviews Marc Raibert.

[ Lex Fridman ]






Dina Genkina: Hi. I’m Dina Genkina for IEEE Spectrum’s Fixing the Future. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. Today my guest is Dr. Benji Maruyama, a Principal Materials Research Engineer at the Air Force Research Laboratory, or AFRL. Dr. Maruyama is a materials scientist, and his research focuses on carbon nanotubes and making research go faster. But he’s also a man with a dream, a dream of a world where science isn’t something done by a select few locked away in an ivory tower, but something most people can participate in. He hopes to start what he calls the billion scientist movement by building AI-enabled research robots that are accessible to all. Benji, thank you for coming on the show.

Benji Maruyama: Thanks, Dina. Great to be with you. I appreciate the invitation.

Genkina: Yeah. So let’s set the scene a little bit for our listeners. So you advocate for this billion scientist movement. If everything works amazingly, what would this look like? Paint us a picture of how AI will help us get there.

Maruyama: Right, great. Thanks. Yeah. So one of the things as you set the scene there is right now, to be a scientist, most people need to have access to a big lab with very expensive equipment. So I think top universities, government labs, industry folks, lots of equipment. It’s like a million dollars, right, to get one of them. And frankly, just not that many of us have access to those kinds of instruments. But at the same time, there’s probably a lot of us who want to do science, right? And so how do we make it so that anyone who wants to do science can try, can have access to instruments so that they can contribute to it? So that’s the basics behind citizen science or democratization of science, so that everyone can do it. And one way to think of it is what happened with 3D printing. It used to be that in order to make something, you had to have access to a machine shop or maybe get fancy tools and dies that could cost tens of thousands of dollars a pop. Or if you wanted to do electronics, you had to have access to very expensive equipment or services. But when 3D printers came along and became very inexpensive, all of a sudden now, anyone with access to a 3D printer, so maybe in a school or a library or a makerspace, could print something out. And it could be something fun, like a game piece, but it could also be something that got you to an invention, something that was maybe useful to the community, was either a prototype or an actual working device.

And so really, 3D printing democratized manufacturing, right? It made it so that many more of us could do things that before only a select few could. And so that’s where we’re trying to go with science now, is that instead of only those of us who have access to big labs, we’re building research robots. And when I say we, we’re doing it, but now there are a lot of others who are doing it as well, and I’ll get into that. But the example that we have is that we took a 3D printer that you can buy off the internet for less than $300. Plus a couple of extra parts, a webcam, a Raspberry Pi board, and a tripod really, so only four components. You can get them all for $300. Load them with open-source software that was developed by AFIT, the Air Force Institute of Technology. So Burt Peterson and Greg Captain [inaudible]. We worked together to build this fully autonomous 3D printing robot that taught itself how to print to better than manufacturer’s specifications. So that was a really fun advance for us, and now we’re trying to take that same idea and broaden it. So I’ll turn it back over to you.

Genkina: Yeah, okay. So maybe let’s talk a little bit about this automated research robot that you’ve made. So right now, it works with a 3D printer, but is the big picture that one day it’s going to give people access to that million dollar lab? How would that look like?

Maruyama: Right, so there are different models out there. One, we just did a workshop at the University of— sorry, North Carolina State University about that very problem, right? So there are two models. One is to get low-cost scientific tools like the 3D printer. There are a couple of different chemistry robots, one out of the University of Maryland and NIST, one out of the University of Washington, that are in the sort of 300 to 1,000 dollar range that makes them accessible. The other part is kind of the user facility model. So in the US, the Department of Energy national labs have many user facilities where you can apply to get time on very expensive instruments. Now we’re talking tens of millions. For example, Brookhaven has a synchrotron light source where you can sign up, and it doesn’t cost you any money to use the facility. And you can get days on that facility. And so that’s already there, but now the advance is that by using this autonomy, this autonomous closed-loop experimentation, the work that you do will be much faster and much more productive. So, for example, on ARES, our Autonomous Research System at AFRL, we were able to do experiments so fast that a professor who came into my lab just took me aside and said, “Hey, Benji, in a week’s worth of time, I did a dissertation’s worth of research.” So maybe five years’ worth of research in a week. So imagine if you keep doing that week after week after week, how fast research goes. So it’s very exciting.

Genkina: Yeah, so tell us a little bit about how that works. So what’s this system that has sped up five years of research into a week and made graduate students obsolete? Not yet, not yet. How does that work? Is that the 3D printer system or is that a—

Maruyama: So we started with our system to grow carbon nanotubes. And I’ll say, actually, when we first thought about it, your comment about graduate students being absolute— obsolete, sorry, is interesting and important because, when we first built our system that worked 100 times faster than normal, I thought that might be the case. We called it sort of graduate student out of the loop. But when I started talking with people who specialize in autonomy, it’s actually the opposite, right? It’s actually empowering graduate students to go faster and also to do the work that they want to do, right? And so just to digress a little bit, if you think about farmers before the Industrial Revolution, what were they doing? They were plowing fields with oxen and beasts of burden and hand plows. And it was hard work. And now, of course, you wouldn’t ask a farmer today to give up their tractor or their combine harvester, right? They would say, of course not. So very soon, we expect it to be the same for researchers, that if you asked a graduate student to give up their autonomous research robot five years from now, they’ll say, “Are you crazy? This is how I get my work done.”

But for our original ARES system, it worked on the synthesis of carbon nanotubes. So that meant that what we’re doing is trying to take this system that’s been pretty well studied, but we haven’t figured out how to make it at scale. So at hundreds of millions of tons per year, sort of like polyethylene production. And part of that is because it’s slow, right? One experiment takes a day, but also because there are just so many different ways to do a reaction, so many different combinations of temperature and pressure and a dozen different gases and half the periodic table as far as the catalyst. It’s just too much to brute-force your way through. So even though we got to where we could do 100 experiments a day instead of one experiment a day, that combinatorial space vastly overwhelmed our ability to do it, even with many research robots or many graduate students. So the idea of having artificial intelligence algorithms that drive the research is key. And so that ability to do an experiment, see what happened, and then analyze it, iterate, and constantly be able to choose the optimal next best experiment to do is where ARES really shines. And so that’s what we did. ARES taught itself how to grow carbon nanotubes at controlled rates. And we were the first ones to do that for materials science in our 2016 publication.

Genkina: That’s very exciting. So maybe we can peer under the hood a little bit of this AI model. How does the magic work? How does it pick the next best point to take and why it’s better than you could do as a graduate student or researcher?

Maruyama: Yeah, and so I think it’s interesting, right? In science, a lot of times we’re taught to hold everything constant, change one variable at a time, search over that entire space, see what happened, and then go back and try something else, right? So we reduce it to one variable at a time. It’s a reductionist approach. And that’s worked really well, but a lot of the problems that we want to go after are simply too complex for that reductionist approach. And so the benefit of being able to use artificial intelligence is that high dimensionality is no problem, right? Tens of dimensions search over very complex high-dimensional parameter space, which is overwhelming to humans, right? Is just basically bread and butter for AI. The other part to it is the iterative part. The beauty of doing autonomous experimentation is that you’re constantly iterating. You’re constantly learning over what just happened. You might also say, well, not only do I know what happened experimentally, but I have other sources of prior knowledge, right? So for example, ideal gas law says that this should happen, right? Or Gibbs phase rule might say, this can happen or this can’t happen. So you can use that prior knowledge to say, “Okay, I’m not going to do those experiments because that’s not going to work. I’m going to try here because this has the best chance of working.”

And within that, there are many different machine learning or artificial intelligence algorithms. Bayesian optimization is a popular one to help you choose what experiment is best. There’s also new AI that people are trying to develop to get better search.
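The closed-loop "choose the next best experiment" idea Maruyama describes is commonly implemented with Bayesian optimization: fit a surrogate model to the experiments run so far, then pick the next setting by maximizing an acquisition function that balances predicted performance against uncertainty. Here is a minimal sketch in Python with NumPy, using a Gaussian-process surrogate and an upper-confidence-bound acquisition over a one-dimensional parameter. The objective function, grid, kernel lengthscale, and all other parameters are illustrative assumptions, not details of ARES:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.1):
    # Squared-exponential kernel between two sets of 1-D points
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard Gaussian-process regression: posterior mean and variance
    # at the query points, given the experiments observed so far.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    Kss = rbf_kernel(x_query, x_query)
    K_inv = np.linalg.inv(K)
    mean = Ks.T @ K_inv @ y_train
    var = np.diag(Kss - Ks.T @ K_inv @ Ks)
    return mean, np.maximum(var, 0.0)

def run_experiment(x):
    # Stand-in for the real experiment: a hypothetical growth-rate
    # objective peaked at x = 0.7 (for illustration only).
    return -(x - 0.7) ** 2

grid = np.linspace(0.0, 1.0, 101)   # candidate parameter settings
x_obs = np.array([0.0, 0.5, 1.0])   # initial seed experiments
y_obs = run_experiment(x_obs)

for _ in range(15):
    mean, var = gp_posterior(x_obs, y_obs, grid)
    ucb = mean + 2.0 * np.sqrt(var)  # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]    # most promising next experiment
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, run_experiment(x_next))

best = x_obs[np.argmax(y_obs)]
print(f"best parameter found: {best:.2f}")
```

Because the acquisition trades off predicted mean against uncertainty, the loop alternates between exploring untried settings and refining near the current best, which is the iterative choose-the-next-experiment behavior described above, just with one knob instead of the tens of dimensions a real synthesis problem has.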

Genkina: Cool. And so the software part of this autonomous robot is available for anyone to download, which is also really exciting. So what would someone need to do to be able to use that? Do they need to get a 3D printer and a Raspberry Pi and set it up? And what would they be able to do with it? Can they just build carbon nanotubes or can they do more stuff?

Maruyama: Right. So what we did, we built ARES OS, which is our open source software, and we’ll make sure to get you the GitHub link so that anyone can download it. And the idea behind ARES OS is that it provides a software framework for anyone to build their own autonomous research robot. And so the 3D printing example will be out there soon. But it’s the starting point. Of course, if you want to build your own new kind of robot, you still have to do the software development, for example, to link the ARES framework, the core, if you will, to your particular hardware, maybe your particular camera or 3D printer, or pipetting robot, or spectrometer, whatever that is. We have examples out there and we’re hoping to get to a point where it becomes much more user-friendly. So having direct Python connects so that you don’t— currently it’s programmed in C#. But to make it more accessible, we’d like it to be set up so that if you can do Python, you can probably have good success in building your own research robot.

Genkina: Cool. And you’re also working on a educational version of this, I understand. So what’s the status of that and what’s different about that version?

Maruyama: Yeah, right. So the educational version is going to be sort of a combination of hardware and software. So what we’re starting with is a low-cost 3D printer. And we’re collaborating now with the University at Buffalo’s Materials Design and Innovation Department. And we’re hoping to build up a robot based on a 3D printer. And we’ll see how it goes. It’s still evolving. But for example, it could be based on this very inexpensive $200 3D printer, an Ender 3D printer. There’s another option out there that’s based on the University of Washington’s Jubilee printer. And that’s a very exciting development as well. So professors Lilo Pozzo and Nadya Peek at the University of Washington built this Jubilee robot with that idea of accessibility in mind. And so combining our ARES OS software with their Jubilee robot hardware is something that I’m very excited about and hope to be able to move forward on.

Genkina: What’s this Jubilee 3D printer? How is it different from a regular 3D printer?

Maruyama: It’s very open source. Not all 3D printers are open source and it’s based on a gantry system with interchangeable heads. So for example, you can get not just a 3D printing head, but other heads that might do things like do indentation, see how stiff something is, or maybe put a camera on there that can move around. And so it’s the flexibility of being able to pick different heads dynamically that I think makes it super useful. For the software, right, we have to have a good, accessible, user-friendly graphical user interface, a GUI. That takes time and effort, so we want to work on that. But again, that’s just the hardware software. Really to make ARES a good educational platform, we need to make it so that a teacher who’s interested can have the lowest activation barrier possible, right? We want she or he to be able to pull a lesson plan off of the internet, have supporting YouTube videos, and actually have the material that is a fully developed curriculum that’s mapped against state standards.

So that, right now, if you’re a teacher who— let’s face it, teachers are already overwhelmed with all that they have to do, putting something like this into their curriculum can be a lot of work, especially if you have to think about, well, I’m going to take all this time, but I also have to meet all of my teaching standards, all the state curriculum standards. And so if we build that out so that it’s a matter of just looking at the curriculum and just checking off the boxes of what state standards it maps to, then that makes it that much easier for the teacher to teach.

Genkina: Great. And what do you think is the timeline? Do you expect to be able to do this sometime in the coming year?

Maruyama: That’s right. These things always take longer than hoped for or expected, but we’re hoping to do it within this calendar year, and we’re very excited to get it going. And I would say for your listeners, if you’re interested in working together, please let me know. We’re very excited about trying to involve as many people as we can.

Genkina: Great. Okay, so you have the educational version, and you have the more research geared version, and you’re working on making this educational version more accessible. Is there something with the research version that you’re working on next, how you’re hoping to upgrade it, or is there something you’re using it for right now that you’re excited about?

Maruyama: There are a number of things, and one that we are very excited about is the possibility of carbon nanotubes being produced at very large scale. So right now, people may remember carbon nanotubes as that great material that sort of never made it and was very overhyped. But there’s a core group of us who are still working on it because of the important promise of that material. So it’s a material that is super strong, stiff, lightweight, electrically conductive. Much better than silicon as a digital electronics compute material. All of those great things, except we’re not making it at large enough scale. It’s actually used pretty significantly in lithium-ion batteries. It’s an important application. But other than that, it’s sort of like, where’s my flying car? It’s never panned out. But there’s, as I said, a group of us who are working to really produce carbon nanotubes at much larger scale. So large scale for nanotubes now is sort of in the kilogram or ton scale. But what we need to get to is hundreds of millions of tons per year production rates. And why is that? Well, there’s a great effort that came out of ARPA-E. So the Department of Energy Advanced Research Projects Agency, and the E is for Energy in that case.

So they funded a collaboration between Shell Oil and Rice University to pyrolyze methane, so natural gas, into hydrogen for the hydrogen economy. So now that’s a clean-burning fuel, plus carbon. And instead of burning the carbon to CO2, which is what we now do, right? We just take natural gas and feed it through a turbine and generate electric power instead of— and that, by the way, generates so much CO2 that it’s causing global climate change. So if we can do that pyrolysis at scale, at hundreds of millions of tons per year, it’s literally a save-the-world proposition, meaning that we can avoid so much CO2 emission that we can reduce global CO2 emissions by 20 to 40 percent. And that is the save-the-world proposition. It’s a huge undertaking, right? That’s a big problem to tackle, starting with the science. We still don’t have the science to efficiently and effectively make carbon nanotubes at that scale. And then, of course, we have to take the material and turn it into useful products. So the batteries are the first example, but think about replacing copper for electrical wire, replacing steel for structural materials, aluminum, all those kinds of applications. But we can’t do it. We can’t even get to that kind of development because we haven’t been able to make the carbon nanotubes at sufficient scale.

So I would say that’s something that I’m working on now that I’m very excited about and trying to get there, but it’s going to take some good developments in our research robots and some very smart people to get us there.

Genkina: Yeah, it seems so counterintuitive that making everything out of carbon is good for lowering carbon emissions, but I guess that’s the break.

Maruyama: Yeah, it is interesting, right? So people talk about carbon emissions, but really, the molecule that’s causing global warming is carbon dioxide, CO2, which you get from burning carbon. And so if you take that methane and pyrolyze it to carbon nanotubes, that carbon is now sequestered, right? It’s not going off as CO2. It’s staying in the solid state. And not only is it just not going up into the atmosphere, but now we’re using it to replace steel, for example, which, by the way— steel, aluminum, copper production, all of those things emit lots of CO2 in their production, right? They’re energy-intensive materials to produce. So it’s kind of ironic.

Genkina: Okay, and are there any other research robots that you’re excited about that you think are also contributing to this democratization of science process?

Maruyama: Yeah, so we talked about Jubilee. There’s also the NIST robot, which is from Professor Ichiro Takeuchi at Maryland and Gilad Kusne at NIST, the National Institute of Standards and Technology. Theirs is fun too. It’s LEGO-based. So it’s actually a chemistry robot built on a LEGO robotics platform, out of LEGO bricks. So I think that’s fun as well. And you can imagine, just like we have LEGO robot competitions, we can have autonomous research robot competitions where we try and do research through these robots, or competitions where everybody sort of starts with the same robot, just like with LEGO robotics. So that’s fun as well. But I would say there’s a growing number of people doing these kinds of, first of all, low-cost science, accessible science, but in particular low-cost autonomous experimentation.

Genkina: So how far are we from a world where a high school student has an idea and they can just go and carry it out on some autonomous research system at some high-end lab?

Maruyama: That’s a really good question. I hope that it’s going to be in 5 to 10 years, that it becomes reasonably commonplace. But it’s going to take still some significant investment to get this going. And so we’ll see how that goes. But I don’t think there are any scientific impediments to getting this done. There is a significant amount of engineering to be done. And sometimes we hear, oh, it’s just engineering. The engineering is a significant problem. And it’s work to get some of these things accessible, low cost. But there are lots of great efforts. There are people who have used CDs, compact discs, to make spectrometers out of. There are lots of good examples of citizen science out there. But it’s, I think, at this point, going to take investment in software and hardware to make it accessible, and then importantly, getting students really up to speed on what AI is and how it works and how it can help them. And so I think it’s actually really important. So again, that’s the democratization of science: if we can make it available to everyone and accessible, then that helps everyone contribute to science. And I do believe that there are important contributions to be made by ordinary citizens, by people who aren’t, you know, PhDs working in a lab.

And I think there’s a lot of science out there to be done. If you ask working scientists, almost no one has run out of ideas or things they want to work on. There are many more scientific problems to work on than we have the time or the funding to work on. And so if we make science cheaper to do, then all of a sudden, more people can do science. And so those questions start to be resolved. And so I think that’s super important. And now, instead of just those of us who work in big labs, you have millions, tens of millions, up to a billion people (that’s the billion scientist idea) who are contributing to the scientific community. And that, to me, is so powerful: that many more of us can contribute than just the few of us who do it right now.

Genkina: Okay, that’s a great place to end on, I think. So, today we spoke to Dr. Benji Maruyama, a material scientist at AFRL, about his efforts to democratize scientific discovery through automated research robots. For IEEE Spectrum, I’m Dina Genkina, and I hope you’ll join us next time on Fixing the Future.



Dina Genkina: Hi. I’m Dina Genkina for IEEE Spectrum‘s Fixing the Future. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beeps, including AI, Change, and Robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org\newsletters to subscribe. Today, a guest is Dr. Benji Maruyama, a Principal Materials Research Engineer at the Air Force Research Laboratory, or AFRL. Dr. Maruyama is a materials scientist, and his research focuses on carbon nanotubes and making research go faster. But he’s also a man with a dream, a dream of a world where science isn’t something done by a select few locked away in an ivory tower, but something most people can participate in. He hopes to start what he calls the billion scientist movement by building AI-enabled research robots that are accessible to all. Benji, thank you for coming on the show.

Benji Maruyama: Thanks, Dina. Great to be with you. I appreciate the invitation.

Genkina: Yeah. So let’s set the scene a little bit for our listeners. So you advocate for this billion scientist movement. If everything works amazingly, what would this look like? Paint us a picture of how AI will help us get there.

Maruyama: Right, great. Thanks. Yeah. So one of the things, as you set the scene there, is that right now, to be a scientist, most people need to have access to a big lab with very expensive equipment. So I think top universities, government labs, industry folks, lots of equipment. It’s like a million dollars, right, to get one of them. And frankly, just not that many of us have access to those kinds of instruments. But at the same time, there’s probably a lot of us who want to do science, right? And so how do we make it so that anyone who wants to do science can try, can have access to instruments so that they can contribute to it? So that’s the basics behind citizen science, or the democratization of science, so that everyone can do it. And one way to think of it is what happened with 3D printing. It used to be that in order to make something, you had to have access to a machine shop, or maybe get fancy tools and dies that could cost tens of thousands of dollars a pop. Or if you wanted to do electronics, you had to have access to very expensive equipment or services. But when 3D printers came along and became very inexpensive, all of a sudden, anyone with access to a 3D printer, so maybe in a school or a library or a makerspace, could print something out. And it could be something fun, like a game piece, but it could also be something that got you to an invention, something that was maybe useful to the community, either a prototype or an actual working device.

And so really, 3D printing democratized manufacturing, right? It made it so that many more of us could do things that before only a select few could. And so that’s where we’re trying to go with science now, is that instead of only those of us who have access to big labs, we’re building research robots. And when I say we, we’re doing it, but now there are a lot of others who are doing it as well, and I’ll get into that. But the example that we have is that we took a 3D printer that you can buy off the internet for less than $300. Plus a couple of extra parts, a webcam, a Raspberry Pi board, and a tripod really, so only four components. You can get them all for $300. Load them with open-source software that was developed by AFIT, the Air Force Institute of Technology. So Burt Peterson and Greg Captain [inaudible]. We worked together to build this fully autonomous 3D printing robot that taught itself how to print to better than manufacturer’s specifications. So that was a really fun advance for us, and now we’re trying to take that same idea and broaden it. So I’ll turn it back over to you.

Genkina: Yeah, okay. So maybe let’s talk a little bit about this automated research robot that you’ve made. So right now, it works with a 3D printer, but is the big picture that one day it’s going to give people access to that million dollar lab? How would that look like?

Maruyama: Right, so there are different models out there. One, we just did a workshop at the University of— sorry, North Carolina State University about that very problem, right? So there’s two models. One is to get low-cost scientific tools like the 3D printer. There are a couple of different chemistry robots, one out of the University of Maryland and NIST, one out of the University of Washington, that are in the sort of 300 to 1,000 dollar range, which makes them accessible. The other part is kind of the user facility model. So in the US, the Department of Energy national labs have many user facilities where you can apply to get time on very expensive instruments. Now we’re talking tens of millions. For example, Brookhaven has a synchrotron light source where you can sign up, and it doesn’t cost you any money to use the facility. And you can get days on that facility. And so that’s already there, but now the advance is that by using autonomous, closed-loop experimentation, the work that you do will be much faster and much more productive. So, for example, on ARES, our Autonomous Research System at AFRL, we actually were able to do experiments so fast that a professor who came into my lab took me aside and said, “Hey, Benji, in a week’s worth of time, I did a dissertation’s worth of research.” So maybe five years’ worth of research in a week. So imagine, if you keep doing that week after week after week, how fast research goes. So it’s very exciting.

Genkina: Yeah, so tell us a little bit about how that works. So what’s this system that has sped up five years of research into a week and made graduate students obsolete? Not yet, not yet. How does that work? Is that the 3D printer system or is that a—

Maruyama: So we started with our system to grow carbon nanotubes. And I’ll say, actually, when we first thought about it, your comment about graduate students being absolute— obsolete, sorry, is interesting and important because, when we first built our system that worked 100 times faster than normal, I thought that might be the case. We called it sort of “graduate student out of the loop.” But when I started talking with people who specialize in autonomy, it’s actually the opposite, right? It’s actually empowering graduate students to go faster and also to do the work that they want to do, right? And so just to digress a little bit, if you think about farmers before the Industrial Revolution, what were they doing? They were plowing fields with oxen and beasts of burden and hand plows. And it was hard work. And now, of course, you wouldn’t ask a farmer today to give up their tractor or their combine harvester, right? They would say, of course not. So very soon, we expect it to be the same for researchers: if you asked a graduate student to give up their autonomous research robot five years from now, they’ll say, “Are you crazy? This is how I get my work done.”

But our original ARES system worked on the synthesis of carbon nanotubes. So what we’re doing is trying to take this system that’s been pretty well studied, but that we haven’t figured out how to make at scale. So at hundreds of millions of tons per year, sort of like polyethylene production. And part of that is because it’s slow, right? One experiment takes a day. But also because there are just so many different ways to do a reaction, so many different combinations of temperature and pressure and a dozen different gases and half the periodic table as far as the catalyst. It’s just too much to brute-force your way through. So even though we went from one experiment a day to 100 experiments a day, that combinatorial space vastly overwhelmed our ability to do it, even with many research robots or many graduate students. So the idea of having artificial intelligence algorithms that drive the research is key. And so that ability to do an experiment, see what happened, then analyze it, iterate, and constantly be able to choose the optimal next best experiment to do is where ARES really shines. And so that’s what we did. ARES taught itself how to grow carbon nanotubes at controlled rates. And we were the first ones to do that for materials science, in our 2016 publication.

Genkina: That’s very exciting. So maybe we can peer under the hood a little bit of this AI model. How does the magic work? How does it pick the next best point to take and why it’s better than you could do as a graduate student or researcher?

Maruyama: Yeah, and so I think it’s interesting, right? In science, a lot of times we’re taught to hold everything constant, change one variable at a time, search over that entire space, see what happened, and then go back and try something else, right? So we reduce it to one variable at a time. It’s a reductionist approach. And that’s worked really well, but a lot of the problems that we want to go after are simply too complex for that reductionist approach. And so the benefit of being able to use artificial intelligence is that high dimensionality is no problem, right? Searching over a very complex, high-dimensional parameter space, tens of dimensions, is overwhelming to humans but basically bread and butter for AI. The other part of it is the iterative part. The beauty of doing autonomous experimentation is that you’re constantly iterating, constantly learning over what just happened. You might also say, well, not only do I know what happened experimentally, but I have other sources of prior knowledge, right? So for example, the ideal gas law says that this should happen, right? Or the Gibbs phase rule might say, this can happen or this can’t happen. So you can use that prior knowledge to say, “Okay, I’m not going to do those experiments because that’s not going to work. I’m going to try here because this has the best chance of working.”

And within that, there are many different machine learning or artificial intelligence algorithms. Bayesian optimization is a popular one to help you choose what experiment is best. There’s also new AI that people are trying to develop to get better search.
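
(Editor’s note: as a concrete illustration of the loop described above, propose an experiment, run it, observe, update, and pick the next best condition, here is a toy sketch in Python. It is not ARES code; the “experiment” is a made-up function whose optimum sits at a temperature of 750, and a simple upper-confidence-bound rule stands in for the Bayesian-optimization step.)

```python
import math
import random

def run_experiment(temp):
    # Toy stand-in for real hardware: the outcome peaks near temp = 750.
    # (Entirely made up for illustration; a real system measures this.)
    return -((temp - 750.0) / 100.0) ** 2 + random.gauss(0, 0.05)

def autonomous_loop(candidates, n_rounds=30, explore=1.0):
    # Closed loop: each round, score every candidate condition by its
    # observed mean outcome plus an exploration bonus for rarely tried
    # conditions (an upper-confidence-bound rule), then run the best one.
    history = {c: [] for c in candidates}
    for t in range(1, n_rounds + 1):
        def score(c):
            runs = history[c]
            if not runs:                      # never tried: highest priority
                return float("inf")
            mean = sum(runs) / len(runs)
            return mean + explore * math.sqrt(math.log(t) / len(runs))
        choice = max(candidates, key=score)
        history[choice].append(run_experiment(choice))
    best = max(candidates,
               key=lambda c: sum(history[c]) / len(history[c]))
    return best, history

random.seed(0)
best, history = autonomous_loop([550, 650, 750, 850, 950])
print(best)  # the loop concentrates its trials near the true optimum
```

The exploration bonus shrinks for conditions that have been tried often, so the loop automatically shifts from exploring the space to exploiting the best-known region, which is the behavior that lets an autonomous system outpace one-variable-at-a-time experimentation.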

Genkina: Cool. And so the software part of this autonomous robot is available for anyone to download, which is also really exciting. So what would someone need to do to be able to use that? Do they need to get a 3D printer and a Raspberry Pi and set it up? And what would they be able to do with it? Can they just build carbon nanotubes or can they do more stuff?

Maruyama: Right. So what we did is build ARES OS, which is our open-source software, and we’ll make sure to get you the GitHub link so that anyone can download it. And the idea behind ARES OS is that it provides a software framework for anyone to build their own autonomous research robot. And so the 3D-printing example will be out there soon. But it’s the starting point. Of course, if you want to build your own new kind of robot, you still have to do the software development, for example, to link the ARES framework, the core, if you will, to your particular hardware, maybe your particular camera or 3D printer, or pipetting robot, or spectrometer, whatever that is. We have examples out there, and we’re hoping to get to a point where it becomes much more user-friendly. So having direct Python connections, so that you don’t— currently it’s programmed in C#. But to make it more accessible, we’d like it to be set up so that if you can do Python, you can probably have good success in building your own research robot.
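
(Editor’s note: ARES OS itself is written in C#, and its actual API is not shown here. Purely as a hypothetical sketch of the pattern described above, a core loop that talks to hardware only through an adapter the user writes, the Python below invents an `Instrument` interface, a toy `SimulatedPrinter`, and a `Campaign` driver; all of these names are illustrative.)

```python
from abc import ABC, abstractmethod

class Instrument(ABC):
    # The one piece a user writes for their own rig: a camera, 3D printer,
    # pipetting robot, spectrometer, etc. (Hypothetical interface.)
    @abstractmethod
    def run(self, params: dict) -> float:
        """Execute one experiment and return a measured outcome."""

class SimulatedPrinter(Instrument):
    # Stand-in for real hardware: "print quality" peaks at speed 60.
    def run(self, params):
        return -abs(params["speed"] - 60) / 60

def sweep_then_exploit(log):
    # Trivial planner: try speeds 20, 40, ..., 100 once each,
    # then keep repeating the best condition found so far.
    speeds = [20, 40, 60, 80, 100]
    if len(log) < len(speeds):
        return {"speed": speeds[len(log)]}
    return max(log, key=lambda entry: entry[1])[0]

class Campaign:
    # Core closed loop: ask the planner for the next experiment,
    # run it on the instrument, and log the result for the next round.
    def __init__(self, instrument, planner):
        self.instrument, self.planner, self.log = instrument, planner, []
    def step(self):
        params = self.planner(self.log)
        outcome = self.instrument.run(params)
        self.log.append((params, outcome))
        return params, outcome

campaign = Campaign(SimulatedPrinter(), sweep_then_exploit)
for _ in range(8):
    campaign.step()
print(campaign.log[-1])  # later steps repeat the best-known condition
```

The point of the split is that the `Campaign` loop never changes: swapping the simulated printer for a real camera-plus-printer rig, or the sweep planner for a Bayesian optimizer, only means replacing one adapter.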

Genkina: Cool. And you’re also working on a educational version of this, I understand. So what’s the status of that and what’s different about that version?

Maruyama: Yeah, right. So the educational version is going to be sort of a combination of hardware and software. So what we’re starting with is a low-cost 3D printer. And we’re collaborating now with the University at Buffalo’s Materials Design and Innovation Department. And we’re hoping to build up a robot based on a 3D printer. And we’ll see how it goes. It’s still evolving. But for example, it could be based on this very inexpensive $200 3D printer, an Ender 3D printer. There’s another printer out there that’s based on the University of Washington’s Jubilee printer. And that’s a very exciting development as well. So professors Lilo Pozzo and Nadya Peek at the University of Washington built this Jubilee robot with that idea of accessibility in mind. And so combining our ARES OS software with their Jubilee robot hardware is something that I’m very excited about and hope to be able to move forward on.

Genkina: What’s this Jubilee 3D printer? How is it different from a regular 3D printer?

Maruyama: It’s very open source. Not all 3D printers are open source, and it’s based on a gantry system with interchangeable heads. So for example, you can get not just a 3D-printing head but other heads that might do things like indentation, to see how stiff something is, or maybe a camera that can move around. And so it’s the flexibility of being able to pick different heads dynamically that I think makes it super useful. For the software, right, we have to have a good, accessible, user-friendly graphical user interface, a GUI. That takes time and effort, so we want to work on that. But again, that’s just the hardware and software. Really, to make ARES a good educational platform, we need to make it so that a teacher who’s interested faces the lowest activation barrier possible, right? We want her or him to be able to pull a lesson plan off of the internet, have supporting YouTube videos, and actually have a fully developed curriculum that’s mapped against state standards.

So that, right now, if you’re a teacher who— let’s face it, teachers are already overwhelmed with all that they have to do, putting something like this into their curriculum can be a lot of work, especially if you have to think about, well, I’m going to take all this time, but I also have to meet all of my teaching standards, all the state curriculum standards. And so if we build that out so that it’s a matter of just looking at the curriculum and just checking off the boxes of what state standards it maps to, then that makes it that much easier for the teacher to teach.

Genkina: Great. And what do you think is the timeline? Do you expect to be able to do this sometime in the coming year?

Maruyama: That’s right. These things always take longer than hoped for and expected, but we’re hoping to do it within this calendar year, and we’re very excited to get it going. And I would say for your listeners, if you’re interested in working together, please let me know. We’re very excited about trying to involve as many people as we can.

Genkina: Great. Okay, so you have the educational version, and you have the more research geared version, and you’re working on making this educational version more accessible. Is there something with the research version that you’re working on next, how you’re hoping to upgrade it, or is there something you’re using it for right now that you’re excited about?

Maruyama: There are a number of things. We are very excited about the possibility of carbon nanotubes being produced at very large scale. So right now, people may remember carbon nanotubes as that great material that sort of never made it and was very overhyped. But there’s a core group of us who are still working on it because of the important promise of that material. So it’s a material that is super strong, stiff, lightweight, and electrically conductive. Much better than silicon as a digital electronics compute material. All of those great things, except we’re not making it at large enough scale. It’s actually used pretty significantly in lithium-ion batteries. That’s an important application. But other than that, it’s sort of like, where’s my flying car? It’s never panned out. But there’s, as I said, a group of us who are working to really produce carbon nanotubes at much larger scale. So large scale for nanotubes now is sort of the kilogram or ton scale. But what we need to get to is hundreds of millions of tons per year production rates. And why is that? Well, there’s a great effort that came out of ARPA-E, the Department of Energy’s Advanced Research Projects Agency, where the E is for Energy in that case.

So they funded a collaboration between Shell Oil and Rice University to pyrolyze methane, natural gas, into hydrogen for the hydrogen economy. So now that’s a clean-burning fuel, plus carbon. And instead of burning the carbon to CO2, which is what we now do, right? We just take natural gas, feed it through a turbine, and generate electric power, and that, by the way, generates so much CO2 that it’s causing global climate change. So if we can do that pyrolysis at scale, at hundreds of millions of tons per year, it’s literally a save-the-world proposition, meaning that we can avoid so much CO2 emission that we can reduce global CO2 emissions by 20 to 40 percent. And that is the save-the-world proposition. It’s a huge undertaking, right? That’s a big problem to tackle, starting with the science. We still don’t have the science to efficiently and effectively make carbon nanotubes at that scale. And then, of course, we have to take the material and turn it into useful products. So batteries are the first example, but think about replacing copper for electrical wire, replacing steel for structural materials, aluminum, all those kinds of applications. But we can’t do it. We can’t even get to that kind of development, because we haven’t been able to make the carbon nanotubes at sufficient scale.
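
(Editor’s note: schematically, the two routes for the same methane molecule compare as follows. Combustion releases the carbon as CO2, while pyrolysis keeps it in solid form, here idealized as nanotube feedstock, and still yields hydrogen.)

```latex
\underbrace{\mathrm{CH_4 + 2\,O_2 \longrightarrow CO_2 + 2\,H_2O}}_{\text{combustion: carbon leaves as CO}_2}
\qquad\qquad
\underbrace{\mathrm{CH_4 \longrightarrow C_{(s)} + 2\,H_2}}_{\text{pyrolysis: carbon stays solid}}
```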

So I would say that’s something that I’m working on now that I’m very excited about and trying to get there, but it’s going to take some good developments in our research robots and some very smart people to get us there.

Genkina: Yeah, it seems so counterintuitive that making everything out of carbon is good for lowering carbon emissions, but I guess that’s the break.

Maruyama: Yeah, it is interesting, right? So people talk about carbon emissions, but really, the molecule that’s causing global warming is carbon dioxide, CO2, which you get from burning carbon. And so if you take that methane and pyrolyze it into carbon nanotubes, that carbon is now sequestered, right? It’s not going off as CO2. It’s staying in the solid state. And not only is it not going up into the atmosphere, but now we’re using it to replace steel, for example. And by the way, steel, aluminum, and copper production all emit lots of CO2; they’re energy-intensive materials to produce. So it’s kind of ironic.

Genkina: Okay, and are there any other research robots that you’re excited about that you think are also contributing to this democratization of science process?

Maruyama: Yeah, so we talked about Jubilee. The Maryland/NIST robot is from Professor Ichiro Takeuchi at Maryland and Gilad Kusne at NIST, the National Institute of Standards and Technology. Theirs is fun too. It’s LEGO-based. So it’s actually built on a LEGO robotics platform, an actual chemistry robot built out of LEGO bricks. So I think that’s fun as well. And you can imagine, just like we have LEGO robot competitions, we can have autonomous research robot competitions, where we try to do research through these robots, or competitions where everybody sort of starts with the same robot, just like with LEGO robotics. So that’s fun as well. But I would say there’s a growing number of people doing these kinds of, first of all, low-cost, accessible science, but in particular low-cost autonomous experimentation.

Genkina: So how far are we from a world where a high school student has an idea and they can just go and carry it out on some autonomous research system at some high-end lab?

Maruyama: That’s a really good question. I hope that in 5 to 10 years it becomes reasonably commonplace. But it’s still going to take some significant investment to get this going. And so we’ll see how that goes. But I don’t think there are any scientific impediments to getting this done. There is a significant amount of engineering to be done. And sometimes we hear, oh, it’s just engineering. The engineering is a significant problem. And it’s work to get some of these things accessible and low cost. But there are lots of great efforts. There are people who have used compact discs to make spectrometers. There are lots of good examples of citizen science out there. But it’s, I think, at this point going to take investment in software and hardware to make it accessible, and then, importantly, getting students really up to speed on what AI is and how it works and how it can help them. And so I think it’s actually really important. So again, that’s the democratization of science: if we can make it available and accessible to everyone, then that helps everyone contribute to science. And I do believe that there are important contributions to be made by ordinary citizens, by people who aren’t, you know, PhDs working in a lab.

And I think there’s a lot of science out there to be done. If you ask working scientists, almost no one has run out of ideas or things they want to work on. There are many more scientific problems to work on than we have the time, people, or funding to work on. And so if we make science cheaper to do, then all of a sudden, more people can do science, and those questions start to be resolved. And so I think that’s super important. And now, instead of just those of us who work in big labs, you have millions, tens of millions, up to a billion people, that’s the billion scientist idea, who are contributing to the scientific community. And that, to me, is so powerful: that many more of us can contribute than just the few of us who do it right now.

Genkina: Okay, that’s a great place to end on, I think. So, today we spoke to Dr. Benji Maruyama, a material scientist at AFRL, about his efforts to democratize scientific discovery through automated research robots. For IEEE Spectrum, I’m Dina Genkina, and I hope you’ll join us next time on Fixing the Future.

Background: Assistive Robotic Arms (ARAs) are designed to assist physically disabled people with daily activities. Existing joysticks and head controls are not applicable for severely disabled people, such as people with locked-in syndrome. Therefore, eye tracking control is part of ongoing research. The related literature spans many disciplines, creating a heterogeneous field that makes it difficult to gain an overview.

Objectives: This work focuses on ARAs that are controlled by gaze and eye movements. By answering the research questions, this paper provides details on the design of the systems, a comparison of input modalities, methods for measuring the performance of these controls, and an outlook on research areas that gained interest in recent years.

Methods: This review was conducted as outlined in the PRISMA 2020 Statement. After identifying a wide range of approaches in use, the authors decided to use the PRISMA-ScR extension for a scoping review to present the results. The identification process was carried out by screening three databases. After the screening process, a snowball search was conducted.

Results: 39 articles and 6 reviews were included in this article. Characteristics related to the system and study design were extracted and presented, divided into three groups based on the use of eye tracking.

Conclusion: This paper aims to provide an overview for researchers new to the field by offering insight into eye-tracking-based robot controllers. We have identified open questions that need to be answered in order to provide people with severe motor function loss with systems that are highly usable and accessible.

In recent years, the development of robots that can engage in non-task-oriented dialogue with people, such as chat, has received increasing attention. This study aims to clarify the factors that improve the user’s willingness to talk with robots in non-task-oriented dialogues (e.g., chat). A previous study reported that exchanging subjective opinions makes such dialogue enjoyable and enthusiastic. In some cases, however, the robot’s subjective opinions are not realistic, i.e., the user believes the robot does not have opinions, and thus we cannot attribute the opinion to the robot. For example, if a robot says that alcohol tastes good, it may be difficult to imagine the robot having such an opinion. In this case, the user’s motivation to exchange opinions may decrease. In this study, we hypothesize that regardless of the type of robot, opinion attribution affects the user’s motivation to exchange opinions with humanoid robots. We examined the effect by preparing various opinions for two kinds of humanoid robots. The experimental results suggest that not only the users’ interest in the topic but also the attribution of the subjective opinions to the robots influences their motivation to exchange opinions. Another analysis revealed that the android significantly increased motivation when users were interested in the topic and did not attribute opinions, while the small robot significantly increased it when users were not interested and did attribute opinions. In situations where there are opinions that cannot be attributed to humanoid robots, the finding that androids are more motivating when users are interested, even if opinions are not attributed, indicates the usefulness of androids.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

Just like a real human, Acrobot will sometimes kick you in the face.

[ Acrobotics ]

Thanks, Elizabeth!

You had me at “wormlike, limbless robots.”

[ GitHub ] via [ Georgia Tech ]

Filmed in July 2017, this video shows us using Atlas to put out a “fire” on our loading dock. This uses a combination of teleoperation and autonomous behaviors through a single, remote computer. Robot built by Boston Dynamics for the DARPA Robotics Challenge in 2013. Software by IHMC Robotics.

I would say that in the middle of a rainstorm is probably the best time to start a fire that you expect to be extinguished by a robot.

[ IHMC ]

We’re hard at work, but Atlas still has time for a dance break.

[ Boston Dynamics ]

This is pretty cool: BruBotics is testing its self-healing robotics gripper technology on commercial grippers from Festo.

[ Paper ] via [ BruBotics ]

Thanks, Bram!

You should read our in-depth article on Stretch 3, so if you haven’t yet, consider this as just a teaser.

[ Hello Robot ]

Inspired by caregiving experts, we proposed a bimanual interactive robotic dressing assistance scheme, which is unprecedented in previous research. In the scheme, an interactive robot joins hands with the human thus supporting/guiding the human in the dressing process, while the dressing robot performs the dressing task. This work represents a paradigm shift of thinking of the dressing assistance task from one-robot-to-one-arm to two-robot-to-one-arm.

[ Project ]

Thanks, Jihong!

Tony Punnoose Valayil from the Bulgarian Academy of Sciences Institute of Robotics wrote in to share some very low-cost hand-rehabilitation robots for home use.

In this video, we present a robot-assisted rehabilitation of the wrist joint which can aid in restoring the strength that has been lost across the upper limb due to stroke. This robot is very cost-effective and can be used for home rehabilitation.

In this video, we present an exoskeleton robot which can be used at home for rehabilitating the index and middle fingers of stroke-affected patients. This robot is built at a cost of 50 euros for patients who are not financially independent to get better treatment.

[ BAS ]

Some very impressive work here from the Norwegian University of Science and Technology (NTNU), showing a drone tracking its position using radar and lidar-based odometry in some nightmare (for robots) environments, including a long tunnel that looks the same everywhere and a hallway full of smoke.

[ Paper ] via [ GitHub ]

I’m sorry, but people should really know better than to make videos like this for social robot crowdfunding by now.

It’s on Kickstarter for about $300, and the fact that it’s been funded so quickly tells me that people have already forgotten about the social robotpocalypse.

[ Kickstarter ]

Introducing Orbit, your portal for managing asset-intensive facilities through real-time and predictive intelligence. Orbit brings a whole new suite of fleet management capabilities and will unify your ecosystem of Boston Dynamics robots, starting with Spot.

[ Boston Dynamics ]





A lot has happened in robotics over the last year. Everyone is wondering how AI will transform robotics, and everyone else is wondering whether humanoids are going to blow it or not, and the rest of us are busy trying not to get completely run over as things shake out however they’re going to shake out.

Meanwhile, over at Hello Robot, they’ve been focused on making their Stretch robot do useful things while also being affordable and reliable and affordable and expandable and affordable and community-friendly and affordable. Which are some really hard and important problems that can sometimes get overwhelmed by flashier things.

Today, Hello Robot is announcing Stretch 3, which provides a suite of upgrades to what they (quite accurately) call “the world’s only lightweight, capable, developer-friendly mobile manipulator.” And impressively, they’ve managed to do it without forgetting about that whole “affordable” part.

Hello Robot

Stretch 3 looks about the same as the previous versions, but there are important upgrades that are worth highlighting. The most impactful: Stretch 3 now comes with the dexterous wrist kit that used to be an add-on, and it now also includes an Intel Realsense D405 camera mounted right behind the gripper, which is a huge help for both autonomy and remote teleoperation—a useful new feature shipping with Stretch 3 that’s based on research out of Maya Cakmak’s lab at the University of Washington, in Seattle. This is an example of turning innovation from the community of Stretch users into product features, a product-development approach that seems to be working well for Hello Robot.

“We’ve really been learning from our community,” says Hello Robot cofounder and CEO Aaron Edsinger. “In the past year, we’ve seen a real uptick in publications, and it feels like we’re getting to this critical-mass moment with Stretch. So with Stretch 3, it’s about implementing features that our community has been asking us for.”

“When we launched, we didn’t have a dexterous wrist at the end as standard, because we were trying to start with truly the minimum viable product,” says Hello Robot cofounder and CTO Charlie Kemp. “And what we found is that almost every order was adding the dexterous wrist, and by actually having it come in standard, we’ve been able to devote more attention to it and make it a much more robust and capable system.”

Kemp says that having Stretch do everything right out of the box (with Hello Robot support) makes a big difference for their research customers. “Making it easier for people to try things—we’ve learned to really value that, because the more steps that people have to go through to experience it, the less likely they are to build on it.” In a research context, this is important because what you’re really talking about is time: The more time people spend just trying to make the robot function, the less time they’ll spend getting the robot to do useful things.

Hello Robot

At this point, you may be thinking of Stretch as a research platform. Or you may be thinking of Stretch as a robot for people with disabilities, if you read our November 2023 cover story about Stretch and Henry and Jane Evans. And the robot is definitely both of those things. But Hello Robot stresses that these specific markets are not their end goal—they see Stretch as a generalist mobile manipulator with a future in the home, as suggested by this Stretch 3 promo video:

Hello Robot

Dishes, laundry, bubble cannons: All of these are critical to the functionality of any normal household. “Stretch is an inclusive robot,” says Kemp. “It’s not just for older adults or people with disabilities. We want a robot that can be beneficial for everyone. Our vision, and what we believe will really happen, whether it’s us or someone else, is that there is going to be a versatile, general-purpose home robot. Right now, clearly, our market is not yet consumers in the home. But that’s where we want to go.”

Robots in the home have been promised for decades, and with the notable exception of the Roomba, there has not been a lot of success. The idea of a robot that could handle dishes or laundry is tempting, but is it near-term or medium-term realistic? Edsinger, who has been at this whole robots thing for a very long time, is an optimist about this, and about the role that Stretch will play. “There are so many places where you can see the progress happening—in sensing, in manipulation,” Edsinger says. “I can imagine those things coming together now in a way that I could not have 5 to 10 years ago, when it seemed so incredibly hard.”

“We’re very pragmatic about what is possible. And I think that we do believe that things are changing faster than we anticipated—10 years ago, I had a pretty clear linear path in mind for robotics, but it’s hard to really imagine where we’ll be in terms of robot capabilities 10 years from now.” —Aaron Edsinger, Hello Robot

I’d say that it’s still incredibly hard, but Edsinger is right that a lot of the pieces do seem to be coming together. Arguably, the hardware is the biggest challenge here, because working in a home puts heavy constraints on what kind of hardware you’re able to use. You’re not likely to see a humanoid in a home anytime soon, because they’d actually be dangerous, and even a quadruped is likely to be more trouble than it’s worth in a home environment. Hello Robot is conscious of this, and that’s been one of the main drivers of the design of Stretch.

“I think the portability of Stretch is really worth highlighting because there’s just so much value in that which is maybe not obvious,” Edsinger tells us. Being able to just pick up and move a mobile manipulator is not normal. Stretch’s weight (24.5 kilograms) is almost trivial to work with, in sharp contrast with virtually every other mobile robot with an arm: Stretch fits into places that humans fit into, and manages to have a similar workspace as well, and its bottom-heavy design makes it safe for humans to be around. It can’t climb stairs, but it can be carried upstairs, which is a bigger deal than it may seem. It’ll fit in the back of a car, too. Stretch is built to explore the world—not just some facsimile of the world in a research lab.

“NYU students have been taking Stretch into tens of homes around New York,” says Edsinger. “They carried one up a four-story walk-up. This enables real in-home data collection. And this is where home robots will start to happen—when you can have hundreds of these out there in homes collecting data for machine learning.”

“That’s where the opportunity is,” adds Kemp. “It’s that engagement with the world about where to apply the technology beneficially. And if you’re in a lab, you’re not going to find it.”

We’ve seen some compelling examples of this recently, with Mobile ALOHA. These are robots learning to be autonomous by having humans teleoperate them through common household skills. But the system isn’t particularly portable, and it costs nearly US $32,000 in parts alone. Don’t get me wrong: I love the research. It’s just going to be difficult to scale, and in order to collect enough data to effectively tackle the world, scale is critical. Stretch is much easier to scale, because you can just straight up buy one.

Or two! You may have noticed that some of the Stretch 3 videos have two robots in them, collaborating with each other. This is not yet autonomous, but with two robots, a single human (or a pair of humans) can teleoperate them as if they were effectively a single two-armed robot:

Hello Robot

Essentially, what you’ve got here is a two-armed robot that (very intentionally) has nothing to do with humanoids. As Kemp explains: “We’re trying to help our community and the world see that there is a different path from the human model. We humans tend to think of the preexisting solution: People have two arms, so we think, well, I’m going to need to have two arms on my robot or it’s going to have all these issues.” Kemp points out that robots like Stretch have shown that really quite a lot of things can be done with only one arm, but two arms can still be helpful for a substantial subset of common tasks. “The challenge for us, which I had just never been able to find a solution for, was how you get two arms into a portable, compact, affordable lightweight mobile manipulator. You can’t!”

But with two Stretches, you have not only two arms but also two shoulders that you can put wherever you want. Washing a dish? You’ll probably want two arms close together for collaborative manipulation. Making a bed? Put the two arms far apart to handle both sides of a sheet at once. It’s a sort of distributed on-demand bimanual manipulation, which certainly adds a little bit of complexity but also solves a bunch of problems when it comes to practical in-home manipulation. Oh—and if those teleop tools look like modified kitchen tongs, that’s because they’re modified kitchen tongs.

Of course, buying two Stretch robots is twice as expensive as buying a single Stretch robot, and even though Stretch 3’s cost of just under $25,000 is very inexpensive for a mobile manipulator and very affordable in a research or education context, we’re still pretty far from something that most people would be able to afford for themselves. Hello Robot says that producing robots at scale is the answer here, which I’m sure is true, but it can be a difficult thing for a small company to achieve.

Moving slowly toward scale is at least partly intentional, Kemp tells us. “We’re still in the process of discovering Stretch’s true form—what the robot really should be. If we tried to scale to make lots and lots of robots at a much lower cost before we fundamentally understood what the needs and challenges were going to be, I think it would be a mistake. And there are many gravestones out there for various home-robotics companies, some of which I truly loved. We don’t want to become one of those.”

This is not to say that Hello Robot isn’t actively trying to make Stretch more affordable, and Edsinger suggests that the next iteration of the robot will be more focused on that. But—and this is super important—Kemp tells us that Stretch has been, is, and will continue to be sustainable for Hello Robot: “We actually charge what we should be charging to be able to have a sustainable business.” In other words, Hello Robot is not relying on some nebulous scale-defined future to transition into a business model that can develop, sell, and support robots. They can do that right now while keeping the lights on. “Our sales have enough margin to make our business work,” says Kemp. “That’s part of our discipline.”

Stretch 3 is available now for $24,950, which is just about the same as the cost of Stretch 2 with the optional add-ons included. There are lots and lots of other new features that we couldn’t squeeze into this article, including FCC certification, a more durable arm, and off-board GPU support. You’ll find a handy list of all the upgrades here.


Odorigui is a type of Japanese cuisine in which people consume live seafood while it’s still moving, making movement part of the experience. You may have some feelings about this (I definitely do), but from a research perspective, getting into what those feelings are and what they mean isn’t really practical. To do so in a controlled way would be both morally and technically complicated, which is why Japanese researchers have started developing robots that can be eaten as they move, wriggling around in your mouth as you chomp down on them. Welcome to HERI: Human-Edible Robot Interaction.

That happy little robot that got its head ripped off by a hungry human (who, we have to say, was exceptionally polite about it) is made primarily of gelatin, along with sugar and apple juice for taste. After all the ingredients were mixed, it was poured into a mold and refrigerated for 12 hours to set, with the resulting texture ending up like a chewy gummy candy. The mold incorporated a couple of air chambers into the structure of the robot, which were hooked up to pneumatics that got the robot to wiggle back and forth.
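The wiggle itself is simple to picture: two chambers inflated in alternation bend the gelatin body side to side. As a rough sketch (my own, not the authors' setup; the timing values are assumptions), the valve schedule might look like this:

```python
# Toy sketch of an alternating two-chamber pneumatic drive, as described
# in the article. The schedule and period are illustrative assumptions.

def wiggle_schedule(duration_s: float, period_s: float = 0.5):
    """Return a list of (time, chamber) valve-opening events that
    alternate between chamber 0 and chamber 1 every half period."""
    events = []
    t = 0.0
    chamber = 0
    while t < duration_s:
        events.append((round(t, 3), chamber))
        chamber = 1 - chamber          # switch to the other side
        t += period_s / 2              # each side holds for half a period
    return events

# Ten seconds of wiggling at a 0.5-second period: 40 alternating events.
schedule = wiggle_schedule(10.0, 0.5)
```

Inflating each chamber for a fixed half period is the simplest possible drive; the actual hardware presumably shapes the pressure waveform more carefully.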

Sixteen students at Osaka University got the chance to eat one of these wiggly little robots. The process was to put your mouth around the robot, let the robot move around in there for 10 seconds for the full experience, and then bite it off, chew, and swallow. Japanese people were chosen partly because this research was done in Japan, but also because, according to the paper, “of the cultural influences on the use of onomatopoeic terms.” In Japanese, there are terms that are useful in communicating specific kinds of textures that can’t easily be quantified.

The participants were asked a series of questions about their experience, including some heavy ones:

  • Did you think what you just ate had animateness?
  • Did you feel an emotion in what you just ate?
  • Did you think what you just ate had intelligence?
  • Did you feel guilty about what you just ate?

Oof.

Compared to a control group of students who ate the robot when it was not moving, the students who ate the moving robot were more likely to interpret it as having a “munya-munya” or “mumbly” texture, showing that movement can influence the eating experience. Analysis of question responses showed that the moving robot also caused people to perceive it as emotive and intelligent, and caused more feelings of guilt when it was consumed. The paper summarizes it pretty well: “In the stationary condition, participants perceived the robot as ‘food,’ whereas in the movement condition, they perceived it as a ‘creature.’”

The good news here is that since people perceive these moving robots as more like living creatures than like ordinary food, they could potentially stand in for eating live critters in a research context, say the researchers: “The utilization of edible robots in this study enabled us to examine the effects of subtle movement variations in human eating behavior under controlled conditions, a task that would be challenging to accomplish with real organisms.” There’s still more work to do to make the robots more like specific living things, but that’s the plan going forward:

Our proposed edible robot design does not specifically mimic any particular biological form. To address these limitations, we will focus on the field by designing edible robots that imitate forms relevant to ongoing discussions on food shortages and cultural delicacies. Specifically, in future studies, we will emulate creatures consumed in contexts such as insect-based diets, which are being considered as a solution to food scarcity issues, and traditional Japanese dishes like “Odorigui” or “Ikizukuri (live fish sashimi).” These imitations are expected to provide deep insights into the psychological and cognitive responses elicited when consuming moving robots, merging technology with necessities and culinary traditions.

“Exploring the eating experience of a pneumatically driven edible robot: Perception, taste, and texture,” by Yoshihiro Nakata, Midori Ban, Ren Yamaki, Kazuya Horibe, Hideyuki Takahashi, and Hiroshi Ishiguro from The University of Electro-Communications and Osaka University, is published in PLOS One.



Just last month, Oslo, Norway-based 1X (formerly Halodi Robotics) announced a massive $100 million Series B, and clearly they’ve been putting the work in. A new video posted last week shows a [insert collective noun for humanoid robots here] of EVE android-ish mobile manipulators doing a wide variety of tasks leveraging end-to-end neural networks (pixels to actions). And best of all, the video seems to be more or less an honest one: a single take, at (appropriately) 1X speed, and full autonomy. But we still had questions! And 1X has answers.

If, like me, you had some very important questions after watching this video, including whether that plant is actually dead and the fate of the weighted companion cube, you’ll want to read this Q&A with Eric Jang, Vice President of Artificial Intelligence at 1X.

IEEE Spectrum: How many takes did it take to get this take?

Eric Jang: About 10 takes that lasted more than a minute; this was our first time doing a video like this, so it was more about learning how to coordinate the film crew and set up the shoot to look impressive.

Did you train your robots specifically on floppy things and transparent things?

Jang: Nope! We train our neural network to pick up all kinds of objects—both rigid and deformable and transparent things. Because we train manipulation end-to-end from pixels, picking up deformables and transparent objects is much easier than a classical grasping pipeline, where you have to figure out the exact geometry of what you are trying to grasp.
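The distinction Jang draws can be sketched in a few lines. This is my own illustrative contrast, not 1X's actual stack: the classical pipeline's `estimate_geometry` and `plan_grasp` steps are hypothetical placeholders, and the "policy" is a single toy linear layer standing in for a real network.

```python
# Illustrative contrast between a classical grasping pipeline and an
# end-to-end pixels-to-actions policy. Not 1X's code; all names are
# hypothetical placeholders.

import random

def classical_pipeline(image):
    # Classical approach: reconstruct explicit geometry, then plan a
    # grasp on it. This struggles with transparent or deformable objects,
    # where the geometry estimate is unreliable.
    geometry = estimate_geometry(image)   # hypothetical perception step
    return plan_grasp(geometry)           # hypothetical grasp planner

def end_to_end_policy(image, weights):
    # End-to-end approach: one learned map from raw pixels straight to an
    # action vector, with no intermediate geometry estimate. Here a single
    # linear layer stands in for the neural network.
    flat = [px for row in image for px in row]
    return [sum(w * x for w, x in zip(w_row, flat)) for w_row in weights]

# A 4x4 "image" and a weight matrix mapping 16 pixels to 3 action dims.
random.seed(0)
image = [[random.random() for _ in range(4)] for _ in range(4)]
weights = [[random.random() for _ in range(16)] for _ in range(3)]
action = end_to_end_policy(image, weights)   # 3-dimensional action vector
```

The point is structural: the end-to-end policy never needs an accurate geometric model of the object, which is exactly why deformables and transparent things stop being special cases.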

What keeps your robots from doing these tasks faster?

Jang: Our robots learn from demonstrations, so they go at exactly the same speed the human teleoperators demonstrate the task at. If we gathered demonstrations where we move faster, so would the robots.
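This answer has a concrete corollary: if the policy inherits the demonstrator's timing, then speeding up the robot is (to a first approximation) a matter of retiming the demonstrations. A minimal sketch of that idea, assuming demos are stored as timestamped actions (my simplification, not 1X's data format):

```python
# Sketch: a behavior-cloned policy replays actions at the timing of the
# demonstration, so compressing demo timestamps yields a faster policy.
# The (timestamp, action) format is an illustrative assumption.

def retime_demo(demo, speedup):
    """demo: list of (t_seconds, action). Compress timestamps by `speedup`."""
    return [(t / speedup, a) for t, a in demo]

demo = [(0.0, "reach"), (1.0, "grasp"), (2.0, "lift")]
fast = retime_demo(demo, 2.0)   # same actions, executed in half the time
```

In practice faster execution also stresses dynamics and tracking, so it is not purely a bookkeeping change, but the dependence on demonstrator speed is real.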

How many weighted companion cubes were harmed in the making of this video?

Jang: At 1X, weighted companion cubes do not have rights.

That’s a very cool method for charging, but it seems a lot more complicated than some kind of drive-on interface directly with the base. Why use manipulation instead?

Jang: You’re right that this isn’t the simplest way to charge the robot, but if we are going to succeed at our mission to build generally capable and reliable robots that can manipulate all kinds of objects, our neural nets have to be able to do this task at the very least. Plus, it reduces costs quite a bit and simplifies the system!

What animal is that blue plush supposed to be?

Jang: It’s an obese shark, I think.

How many different robots are in this video?

Jang: 17? And more that are stationary.

How do you tell the robots apart?

Jang: They have little numbers printed on the base.

Is that plant dead?

Jang: Yes, we put it there because no CGI / 3D rendered video would ever go through the trouble of adding a dead plant.

What sort of existential crisis is the robot at the window having?

Jang: It was supposed to be opening and closing the window repeatedly (good for testing statistical significance).

If one of the robots was actually a human in a helmet and a suit holding grippers and standing on a mobile base, would I be able to tell?

Jang: I was super flattered by this comment on the YouTube video:

But if you look at the area where the upper arm tapers at the shoulder, it’s too thin for a human to fit inside while still having such broad shoulders:

Why are your robots so happy all the time? Are you planning to do more complex HRI stuff with their faces?

Jang: Yes, more complex HRI stuff is in the pipeline!

Are your robots able to autonomously collaborate with each other?

Jang: Stay tuned!

Is the skew tetromino the most difficult tetromino for robotic manipulation?

Jang: Good catch! Yes, the green one is the worst of them all because there are many valid ways to pinch it with the gripper and lift it up. In robotic learning, if there are multiple ways to pick something up, it can actually confuse the machine learning model. Kind of like asking a car to turn left and right at the same time to avoid a tree.
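The failure mode Jang describes is the classic multimodality problem in imitation learning: when demonstrations contain several equally valid actions, a policy trained with a mean-squared-error objective converges toward their average, which may match none of them. A minimal sketch with hypothetical numbers (not 1X's actual training setup):

```python
from statistics import mean

# Two equally valid demonstrated grasp angles for the skew tetromino
# (hypothetical values): pinch it from the left or from the right.
demos = [-45.0, +45.0]  # degrees

# A regressor trained with mean-squared-error loss converges to the
# action minimizing average squared distance to the demonstrations,
# which is their mean...
mse_optimal_action = mean(demos)

# ...i.e. 0 degrees: an "averaged" grasp that matches neither
# demonstration and may not be a valid grasp at all.
print(mse_optimal_action)  # 0.0
```

This is why imitation-learning systems often model the full action distribution (for example with mixture models or diffusion policies) instead of regressing a single action.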

Everyone else’s robots are making coffee. Can your robots make coffee?

Jang: Yep! We were planning to throw in some coffee making in this video as an Easter egg, but the coffee machine broke right before the film shoot, and it turns out it’s impossible to get a Keurig K-Slim in Norway via next-day shipping.

1X is currently hiring both AI researchers (imitation learning, reinforcement learning, large-scale training, etc.) and android operators (!), which actually sounds like a super fun and interesting job. More here.



This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

If Disney’s history of storytelling has taught us anything, it’s to never underestimate the power of a great sidekick. Even though sidekicks aren’t the stars of the show, they provide life and energy and move the story along in important ways. It’s hard to imagine Aladdin without the Genie, or Peter Pan without Tinker Bell.

In robotics, however, solo acts proliferate. Even when multiple robots are used, they usually act in parallel. One key reason for this is that most robots are designed in ways that make direct collaboration with other robots difficult. Stiff, strong robots are more repeatable and easier to control, but those designs have very little forgiveness for the imperfections and mismatches that are inherent in coming into contact with another robot.

Having robots work together–especially if they have complementary skill sets–can open up some exciting opportunities, especially in the entertainment robotics space. At Walt Disney Imagineering, our research and development teams have been working on this idea of collaboration between robots, and we were able to show off the result of one such collaboration in Shanghai this week, when a little furry character interrupted the opening moments for the first-ever Zootopia land.

Our newest robotic character, Duke Weaselton, rolled onstage at the Shanghai Disney Resort for the first time last December, pushing a purple kiosk and blasting pop music. As seen in the video below, the audience got a kick out of watching him hop up on top of the kiosk and try to negotiate with the Chairman of Disney Experiences, Josh D’Amaro, for a new job. And of course, some new perks. After a few moments of wheeling and dealing, Duke gets gently escorted offstage by team members Richard Landon and Louis Lambie.

What might not be obvious at first is that the moment you just saw was enabled not by one robot, but by two. Duke Weaselton is the star of the show, but his dynamic motion wouldn’t be possible without the kiosk, which is its own independent, actuated robot. While these two robots are very different, by working together as one system, they’re able to do things that neither could do alone.

The character and the kiosk bring two very different kinds of motion together, and create something more than the sum of their parts in the process. The character is an expressive, bipedal robot with an exaggerated, animated motion style. It looks fantastic, but it’s not optimized for robust, reliable locomotion. The kiosk, meanwhile, is a simple wheeled system that behaves in a highly predictable way. While that’s great for reliability, it means that by itself it’s not likely to surprise you. But when we combine these two robots, we get the best of both worlds. The character robot can bring a zany, unrestrained energy and excitement as it bounces up, over, and alongside the kiosk, while the kiosk itself ensures that both robots reliably get to wherever they are going.

Harout Jarchafjian, Sophie Bowe, Tony Dohi, Bill West, Marcela de los Rios, Bob Michel, and Morgan Pope. Morgan Pope

The collaboration between the two robots is enabled by designing them to be robust and flexible, and with motions that can tolerate a large amount of uncertainty while still delivering a compelling show. This is a direct result from lessons learned from an earlier robot, one that tumbled across the stage at SXSW earlier this year. Our basic insight is that a small, lightweight robot can be surprisingly tough, and that this toughness enables new levels of creative freedom in the design and execution of a show.

This level of robustness also makes collaboration between robots easier. Because the character robot is tough and because there is some flexibility built into its motors and joints, small errors in placement and pose don’t create big problems like they might for a more conventional robot. The character can lean on the motorized kiosk to create the illusion that it is pushing it across the stage. The kiosk then uses a winch to hoist the character onto a platform, where electromagnets help stabilize its feet. Essentially, the kiosk is compensating for the fact that Duke himself can’t climb, and might be a little wobbly without having his feet secured. The overall result is a free-ranging bipedal robot that moves in a way that feels natural and engaging, but that doesn’t require especially complicated controls or highly precise mechanical design. Here’s a behind-the-scenes look at our development of these systems:

Disney Imagineering

To program Duke’s motions, our team uses an animation pipeline that was originally developed for the SXSW demo, where a designer can pose the robot by hand to create new motions. We have since developed an interface which can also take motions from conventional animation software tools. Motions can then be adjusted to adapt to the real physical constraints of the robots, and that information can be sent back to the animation tool. As animations are developed, it’s critical to retain a tight synchronization between the kiosk and the character. The system is designed so that the motion of both robots is always coordinated, while simultaneously supporting the ability to flexibly animate individual robots–or individual parts of the robot, like the mouth and eyes.
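The pipeline described above (animate freely, adjust each motion to the robots' physical constraints, and keep both robots on one clock) can be sketched roughly as clamping keyframes to joint limits on a shared timeline. All names and limit values below are hypothetical illustrations, not Disney's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    t: float                  # shared show timeline, seconds
    joints: dict              # joint name -> commanded value

# Hypothetical per-joint limits for the two robots; in a real pipeline
# these would come from the hardware description.
JOINT_LIMITS = {
    "character.neck": (-30.0, 30.0),      # degrees
    "kiosk.wheel_speed": (0.0, 100.0),    # percent of max
}

def clamp_to_limits(frame: Keyframe) -> Keyframe:
    """Adjust an animator's pose to what the hardware can actually do."""
    clamped = {}
    for name, value in frame.joints.items():
        lo, hi = JOINT_LIMITS.get(name, (-180.0, 180.0))
        clamped[name] = min(max(value, lo), hi)
    return Keyframe(frame.t, clamped)

# Because the character's and kiosk's keyframes carry the same
# timestamps, the two robots stay synchronized even after adjustment.
show = [Keyframe(0.0, {"character.neck": 45.0, "kiosk.wheel_speed": 50.0})]
adjusted = [clamp_to_limits(f) for f in show]
print(adjusted[0].joints["character.neck"])  # 30.0 (clamped from 45)
```

The key design point mirrored from the article: constraint adjustment happens per robot, but the timeline is global, so the kiosk and the character never drift apart.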

Over the past nine months, we explored a few different kinds of collaborative locomotion approaches. The GIFs below show some early attempts at riding a tricycle, skateboarding, and pushing a crate. In each case, the idea is for a robotic character to eventually collaborate with another robotic system that helps bring that character’s motions to life in a stable and repeatable way.

Disney hopes that their Judy Hopps robot will soon be able to use the help of a robotic tricycle, crate, or skateboard to enable new forms of locomotion. Morgan Pope

This demo with Duke Weaselton and his kiosk is just the beginning, says Principal R&D Imagineer Tony Dohi, who leads the project for us. “Ultimately, what we showed today is an important step towards a bigger vision. This project is laying the groundwork for robots that can interact with each other in surprising and emotionally satisfying ways. Today it’s a character and a kiosk, but moving forward we want to have multiple characters that can engage with each other and with our guests.”

Walt Disney Imagineering R&D is exploring a multi-pronged development strategy for our robotic characters. Engaging character demonstrations like Duke Weaselton focus on quickly prototyping complete experiences using immediately accessible techniques. In parallel, our research group is developing new technologies and capabilities that become the building blocks for both elevating existing experiences, and designing and delivering completely new shows. The robotics team led by Moritz Bächer shared one such building block–embodied in a highly expressive and stylized robotic walking character–at IROS in October. The capabilities demonstrated there can eventually be used to help robots like Duke Weaselton perform more flexibly, more reliably, and more spectacularly.

“Authentic character demonstrations are useful because they help inform what tools are the most valuable for us to develop,” explains Bächer. “In the end our goal is to create tools that enable our teams to produce and deliver these shows rapidly and efficiently.” This ties back to the fundamental technical idea behind the Duke Weaselton show moment–collaboration is key!



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 02 February 2024, ZURICH
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

In this video, we present Ringbot, a novel leg-wheel transformer robot incorporating a monocycle mechanism with legs. Ringbot aims to provide versatile mobility by replacing the driver and driving components of a conventional monocycle vehicle with legs mounted on compact driving modules inside the wheel.

[ Paper ] via [ KIMLAB ]

Making money with robots has always been a struggle, but I think ALOHA 2 has figured it out.

Seriously, though, that is some impressive manipulation capability. I don’t know what that freakish panda thing is, but getting a contact lens from the package onto its bizarre eyeball was some wild dexterity.

[ ALOHA 2 ]

Highlights from testing our new arms built by Boardwalk Robotics. Installed in October of 2023, these new arms are not just for boxing, and provide much greater speed and power. This matches the mobility and manipulation goals we have for Nadia!

The least dramatic but possibly most important bit of that video is when Nadia uses her arms to help her balance against a wall, which is one of those things that humans do all the time without thinking about it. And we always appreciate being shown things that don’t go perfectly alongside things that do. The bit at the end there was Nadia not quite managing to do lateral arm raises. I can relate; that’s my reaction when I lift weights, too.

[ IHMC ]

Thanks, Robert!

The recent progress in commercial humanoids is just exhausting.

[ Unitree ]

We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia.

[ Science Robotics ]

Have you ever seen a robot skiing?! Ascento robot enjoying a day in the ski slopes of Davos.

[ Ascento ]

Can’t trip Atlas up! Our humanoid robot gets ready for real work combining strength, perception, and mobility.

Notable that Boston Dynamics is now saying that Atlas “gets ready for real work.” Wonder how much to read into that?

[ Boston Dynamics ]

You deserve to be free from endless chores! YOU! DESERVE! CHORE! FREEDOM!

Pretty sure this is teleoperated, so someone is still doing the chores, sadly.

[ MagicLab ]

Multimodal UAVs (Unmanned Aerial Vehicles) are rarely capable of more than two modalities, i.e., flying and walking or flying and perching. However, being able to fly, perch, and walk could further improve their usefulness by expanding their operating envelope. For instance, an aerial robot could fly a long distance, perch in a high place to survey the surroundings, then walk to avoid obstacles that could potentially inhibit flight. Birds are capable of these three tasks, and so offer a practical example of how a robot might be developed to do the same.

[ Paper ] via [ EPFL LIS ]

Nissan announces the concept model of “Iruyo”, a robot that supports babysitting while driving. Iruyo relieves the anxiety of the mother, father, and baby in the driver’s seat. We support safe and secure driving for parents and children. Nissan and Akachan Honpo are working on a project to make life better with cars and babies. Iruyo was born out of the voices of mothers and fathers who said, “I can’t hold my baby while driving alone.”

[ Nissan ]

Building 937 houses the coolest robots at CERN. This is where the action happens to build and program robots that can tackle the unconventional challenges presented by the Laboratory’s unique facilities. Recently, a new type of robot called CERNquadbot has entered CERN’s robot pool and successfully completed its first radiation protection test in the North Area.

[ CERN ]

Congrats to Starship, the OG robotic delivery service, on their $90m raise.

[ Starship ]

By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.

[ GitHub ] via [ MIT ]

This is one of those things that’s far more difficult than it might look.

[ ROAM Lab ]

Our current care system does not scale and our populations are ageing fast. Robodies are multipliers for care staff, allowing them to work together with local helpers to provide protection and assistance around the clock while maintaining personal contact with people in the community.

[ DEVANTHRO ]

It’s the world’s smallest humanoid robot, until someone comes out with slightly smaller servos!

[ Guinness ]

Deep Robotics wishes you a happy year of the dragon!

[ Deep Robotics ]

SEAS researchers are helping develop resilient and autonomous deep space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi), which is led by Purdue University in partnership with SEAS, the University of Connecticut, and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep space habitats that can adapt, absorb and rapidly recover from expected and unexpected disruptions.”

[ Harvard ]

Find out how a bold vision became a success story! The DLR Institute of Robotics and Mechatronics has been researching robotic arms since the 1990s - originally for use in space. It was a long and ambitious journey before these lightweight robotic arms could be used on Earth and finally in operating theaters, a journey that required concentrated robotics expertise, interdisciplinary cooperation, and ultimately a successful technology transfer.

[ DLR MIRO ]

Robotics is changing the world, driven by focused teams of diverse experts. Willow Garage operated with the mantra “Impact first, return on capital second” and through ROS and the PR2 had enormous impact. Autonomous mobile robots are finally being accepted in the service industry, and Savioke (now Relay Robotics) was created to drive that impact. This talk will trace the evolution of Relay robots and their deployment in hotels, hospitals and other service industries, starting with roots at Willow Garage. As robotics technology is poised for the next round of advances, how do we create and maintain the organizations that continue to drive progress?

[ Northwestern ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 02 February 2024, ZURICHHRI 2024: 11–15 March 2024, BOULDER, COLO.Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCEICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

In this video, we present Ringbot, a novel leg-wheel transformer robot incorporating a monocycle mechanism with legs. Ringbot aims to provide versatile mobility by replacing the driver and driving components of a conventional monocycle vehicle with legs mounted on compact driving modules inside the wheel.

[ Paper ] via [ KIMLAB ]

Making money with robots has always been a struggle, but I think ALOHA 2 has figured it out.

Seriously, though, that is some impressive manipulation capability. I don’t know what that freakish panda thing is, but getting a contact lens from the package onto its bizarre eyeball was some wild dexterity.

[ ALOHA 2 ]

Highlights from testing our new arms built by Boardwalk Robotics. Installed in October of 2023, these new arms are not just for boxing, and are provide much greater speed and power. This matches the mobility and manipulation goals we have for Nadia!

The least dramatic but possibly most important bit of that video is when Nadia uses her arms to help her balance against a wall, which is one of those things that humans do all the time without thinking about it. And we always appreciate being shown things that don’t go perfectly alongside things that do. The bit at the end there was Nadia not quite managing to do lateral arm raises. I can relate; that’s my reaction when I lift weights, too.

[ IHMC ]

Thanks, Robert!

The recent progress in commercial humanoids is just exhausting.

[ Unitree ]

We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia.

[ Science Robotics ]

Have you ever seen a robot skiing?! Ascento robot enjoying a day in the ski slopes of Davos.

[ Ascento ]

Can’t trip Atlas up! Our humanoid robot gets ready for real work combining strength, perception, and mobility.

Notable that Boston Dynamics is now saying that Atlas “gets ready for real work.” Wonder how much to read into that?

[ Boston Dynamics ]

You deserve to be free from endless chores! YOU! DESERVE! CHORE! FREEDOM!

Pretty sure this is teleoperated, so someone is still doing the chores, sadly.

[ MagicLab ]

Multimodal UAVs (Unmanned Aerial Vehicles) are rarely capable of more than two modalities, i.e., flying and walking or flying and perching. However, being able to fly, perch, and walk could further improve their usefulness by expanding their operating envelope. For instance, an aerial robot could fly a long distance, perch in a high place to survey the surroundings, then walk to avoid obstacles that could potentially inhibit flight. Birds are capable of these three tasks, and so offer a practical example of how a robot might be developed to do the same.

[ Paper ] via [ EPFL LIS ]

Nissan announces the concept model of “Iruyo”, a robot that supports babysitting while driving. Iruyo relieves the anxiety of the mother, father, and baby in the driver’s seat. We support safe and secure driving for parents and children. Nissan and Akachan Honpo are working on a project to make life better with cars and babies. Iruyo was born out of the voices of mothers and fathers who said, “I can’t hold my baby while driving alone.”

[ Nissan ]

Building 937 houses the coolest robots at CERN. This is where the action happens to build and program robots that can tackle the unconventional challenges presented by the Laboratory’s unique facilities. Recently, a new type of robot called CERNquadbot has entered CERN’s robot pool and successfully completed its first radiation protection test in the North Area.

[ CERN ]

Congrats to Starship, the OG robotic delivery service, on their $90m raise.

[ Starship ]

By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.

[ GitHub ] via [ MIT ]

This is one of those things that’s far more difficult than it might look.

[ ROAM Lab ]

Our current care system does not scale and our populations are ageing fast. Robodies are multipliers for care staff, allowing them to work together with local helpers to provide protection and assistance around the clock while maintaining personal contact with people in the community.

[ DEVANTHRO ]

It’s the world’s smallest humanoid robot, until someone comes out with slightly smaller servos!

[ Guinness ]

Deep Robotics wishes you a happy year of the dragon!

[ Deep Robotics ]

SEAS researchers are helping develop resilient and autonomous deep space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi), which is led by Purdue University in partnership with SEAS, the University of Connecticut, and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep space habitats that can adapt, absorb and rapidly recover from expected and unexpected disruptions.”

[ Harvard ]

Find out how a bold vision became a success story! The DLR Institute of Robotics and Mechatronics has been researching robotic arms since the 1990s, originally for use in space. It was a long and ambitious journey before these lightweight robotic arms could be used on Earth and finally in operating theaters, a journey that required concentrated robotics expertise, interdisciplinary cooperation, and ultimately a successful technology transfer.

[ DLR MIRO ]

Robotics is changing the world, driven by focused teams of diverse experts. Willow Garage operated with the mantra “Impact first, return on capital second” and through ROS and the PR2 had enormous impact. Autonomous mobile robots are finally being accepted in the service industry, and Savioke (now Relay Robotics) was created to drive that impact. This talk will trace the evolution of Relay robots and their deployment in hotels, hospitals and other service industries, starting with roots at Willow Garage. As robotics technology is poised for the next round of advances, how do we create and maintain the organizations that continue to drive progress?

[ Northwestern ]
