Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

We present Human to Humanoid (H2O), a reinforcement learning (RL) based framework that enables real-time, whole-body teleoperation of a full-sized humanoid robot with only an RGB camera. We successfully achieve teleoperation of dynamic, whole-body motions in real-world scenarios, including walking, back jumping, kicking, turning, waving, pushing, boxing, etc. To the best of our knowledge, this is the first demonstration to achieve learning-based, real-time, whole-body humanoid teleoperation.

[ CMU ]

Legged robots have the potential to traverse complex terrain and access confined spaces beyond the reach of traditional platforms thanks to their ability to carefully select footholds and flexibly adapt their body posture while walking. However, robust deployment in real-world applications is still an open challenge. In this paper, we present a method for legged locomotion control using reinforcement learning and 3D volumetric representations to enable robust and versatile locomotion in confined and unstructured environments.

[ Takahiro Miki ]

Sure, 3.3 meters per second is fast for a humanoid, but I’m more impressed by the spinning around while walking downstairs.

[ Unitree ]

Improving the safety of collaborative manipulators necessitates the reduction of inertia in the moving part. We introduce a novel approach in the form of a passive, 3D wire aligner, serving as a lightweight and low-friction power transmission mechanism, thus achieving the desired low inertia in the manipulator’s operation.

[ SAQIEL ]

Thanks, Temma!

Robot Era just launched Humanoid-Gym, an open-source reinforcement learning framework for bipedal humanoids. As you can see from the video, RL algorithms have given the robot, called Xiao Xing, or XBot, the ability to climb up and down haphazardly stacked boxes with relative stability and ease.

[ Robot Era ]

“Impact-Aware Bimanual Catching of Large-Momentum Objects.” Need I say more?

[ SLMC ]

More than 80% of stroke survivors experience walking difficulty, significantly impacting their daily lives, independence, and overall quality of life. Now, new research from the University of Massachusetts Amherst pushes forward the bounds of stroke recovery with a unique robotic hip exoskeleton, designed as a training tool to improve walking function. This invites the possibility of new therapies that are more accessible and easier to translate from practice to daily life, compared to current rehabilitation methods.

[ UMass Amherst ]

Thanks, Julia!

The manipulation here is pretty impressive, but it’s hard to know how impressive without also knowing how much the video was sped up.

[ Somatic ]

DJI drones work to make the world a better place and one of the ways that we do this is through conservation work. We partnered with Halo Robotics and the OFI Orangutan Foundation International to showcase just how these drones can make an impact.

[ DJI ]

The aim of the test is to demonstrate the removal and replacement of satellite modules into a 27U CubeSat format using augmented reality control of a robot. In this use case, the “client” satellite is being upgraded and refueled using modular componentry. The robot will then remove the failed computer module and place it in a fixture. It will then do the same with the propellant tank. The robot will then place these correctly back into the satellite.

[ Extend Robotics ]

This video features some of the highlights and favorite moments from the CYBATHLON Challenges 2024 that took place on 2 February, showing so many diverse types of assistive technology taking on discipline tasks and displaying pilots’ tenacity and determination. The Challenges saw new teams, new tasks, and new formats for many of the CYBATHLON disciplines.

[ Cybathlon ]

It’s been a long road to electrically powered robots.

[ ABB ]

Small drones for catastrophic wildfires (ones covering more than [40,470 hectares]) are like bringing a flashlight to light up a football field. This short video describes the major uses for drones of all sizes and why and when they are used, or why not.

[ CRASAR ]

It probably will not surprise you that there are a lot of robots involved in building Rivian trucks and vans.

[ Kawasaki Robotics ]

DARPA’s Learning Introspective Control (LINC) program is developing machine learning methods that show promise in making that scenario closer to reality. LINC aims to fundamentally improve the safety of mechanical systems—specifically in ground vehicles, ships, drone swarms, and robotics—using various methods that require minimal computing power. The result is an AI-powered controller the size of a cell phone.

[ DARPA ]

This survey reviews advances in 3D object detection approaches for autonomous driving. A brief introduction to 2D object detection is given first, and the drawbacks of existing methodologies in highly dynamic environments are identified. The paper then reviews state-of-the-art 3D object detection techniques that use monocular and stereo vision for reliable detection in urban settings. Based on depth inference, learning schemes, and internal representation, it presents a taxonomy of three classes: model-based and geometrically constrained approaches, end-to-end learning methodologies, and hybrid methods. Particular attention is given to the current trend toward multi-view, end-to-end detectors because of their improved robustness. Detectors from the last two classes were specially selected for how they exploit the autonomous driving context in terms of geometry, scene content, and instance distribution. To assess the effectiveness of each method, 3D object detection datasets for autonomous vehicles are described along with their distinctive features (e.g., varying weather conditions, multi-modality, multi-camera perspectives) and the metrics associated with their different difficulty categories. In addition, multi-modal visual datasets (i.e., V2X) that may tackle the problem of single-view occlusion are included. Finally, current research trends in object detection are summarized, followed by a discussion of possible directions for future research in this domain.



You’ve seen this before: a truck unloading robot that’s made up of a mobile base with an arm on it that drives up into the back of a trailer and then uses suction to grab stacked boxes and put them onto a conveyor belt. We’ve written about a couple of the companies doing this, and there are even more out there. It’s easy to understand why—trailer unloading involves a fairly structured and controlled environment with a very repetitive task, it’s a hard job that sucks for humans, and there’s an enormous amount of demand.

While it’s likely true that there’s enough room for a whole bunch of different robotics companies in the trailer unloading space, a given customer is probably only going to pick one, and they’re going to pick the one that offers the right combination of safety, capability, and cost. Anyware Robotics thinks they have that mix, aided by a box handling solution that is both very clever and so obvious that I’m wondering why I didn’t think of it myself.

The overall design of Pixmo itself is fairly standard as far as trailer unloading robots go, but some of the details are interesting. We’re told that Pixmo is the only trailer unloading system that integrates a heavy-payload collaborative arm, actually a fairly new commercial arm from Fanuc. This means that Anyware Robotics doesn’t have to faff about with their own hardware, and also that their robot is arguably safer, being ISO certified safe to work directly with people. The base is custom, but Anyware is contracting it out to a big robotics OEM.

“We’ve put a lot of effort into making sure that most of the components of our robot are off-the-shelf,” co-founder and CEO Thomas Tang tells us. “There are already so many mature and cost-efficient suppliers that we want to offload the supply chain, the certification, the reliability testing onto someone else’s shoulders.” And while there are a selection of automated mobile robots (AMRs) out there that seem like they could get the job done, the problem is that they’re all designed for flat surfaces, and getting into and out of the back of the trailer often involves a short, steep ramp, hence the need for their own design. Even with the custom base, Tang says that Pixmo is very cost efficient, and the company predicts that it will be approximately one third the cost of other solutions with a payback of about 24 months.

But here’s the really clever bit:

Anyware Robotics Pixmo Trailer Unloading

That conveyor system in front of the boxes is an add-on that’s used in support of Pixmo. There are two benefits here: first, having the conveyor add-on aligned with the base of a box minimizes the amount of lifting that Pixmo has to do. This allows Pixmo to handle boxes of up to 65 lbs with a lift-and-slide technique, putting it at the top end of trailer unloading robot payload. And the second benefit is that the add-on system decreases the distance that Pixmo has to move the box to just about as small as it can possibly be, eliminating the need for the arm to rotate around to place a box on a conveyor next to or behind itself. Lowering this cycle time means that Pixmo can achieve a throughput of up to 1,000 boxes per hour—about one box every four seconds, which the Internet suggests is quite fast, even for a professional human. Anyware Robotics is introducing this add-on system at MODEX next week, and they have a patent pending on the idea.
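
For a rough sense of what that throughput claim implies, here is a quick back-of-the-envelope calculation in Python; it uses only the 1,000-boxes-per-hour and 65-pound figures quoted above, nothing else from Anyware:

```python
# Back-of-the-envelope check on the quoted Pixmo throughput figures.
boxes_per_hour = 1000                       # peak rate claimed above
seconds_per_box = 3600 / boxes_per_hour     # 3.6 s, i.e. roughly one box every four seconds
print(f"{seconds_per_box:.1f} s per box")

max_box_lbs = 65                            # payload ceiling quoted above
print(f"up to {boxes_per_hour * max_box_lbs:,} lb moved per hour at maximum payload")
```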

This seems like such a simple, useful idea that I asked Tang why they were the first ones to come up with it. “In robotics startups, there tends to be a legacy mindset issue,” Tang told me. “When people have been working on robot arms for so many years, we just think about how to use robot arms to solve everything. Maybe that’s the reason why other companies didn’t come up with this solution.” Tang says that Anyware started with much more complicated add-on designs before finding this solution. “Usually it’s the most simple solution that has the most trial and error behind it.”

Anyware Robotics is focused on trailer unloading for now, but Pixmo could easily be adapted for palletizing and depalletizing or somewhat less easily for other warehouse tasks like order picking or machine tending. But why stop there? A mobile manipulator can (theoretically) do it all (almost), and that’s exactly what Tang wants:

In our long-term vision, we believe that the future will have two different types of general purpose robots. In one direction is the humanoid form, which is a really flexible solution for jobs where you want to replace a human. But there are so many jobs that are just not reasonable for a human body to do. So we believe there should be another form of general purpose robot, which is designed for industrial tasks. Our design philosophy is in that direction—it’s also general purpose, but for industrial applications.

At just over one year old, Anyware has already managed to complete a pilot program (and convert it to a purchase order). They’re currently in the middle of several other pilot programs with leading third-party logistics providers, and they expect to spend the next several months focusing on productization with the goal of releasing the first commercial version of Pixmo by July of this year.



Cobots are robots that are built for human-robot collaboration (HRC) in a shared environment. In the aftermath of disasters, cobots can cooperate with humans to mitigate risks and increase the possibility of rescuing people in distress. This study examines the resilient and dynamic synergy between a swarm of snake robots, first responders, and people to be rescued, and implements the delivery of first aid to potential victims dispersed around a disaster environment. In the HRC simulation framework presented in this study, the first responder initially deploys a UAV, a swarm of snake robots, and emergency items. The UAV provides the first responder with the site planimetry, which includes the layout of the area as well as the precise locations of the individuals in need of rescue and of the aiding goods to be delivered. Each snake robot in the swarm is then assigned a victim. Subsequently, each snake robot determines an optimal path using the A* algorithm to approach and reach its respective target while avoiding obstacles. Using their prehensile capabilities, the snake robots grasp the aiding objects to be dispatched, successively arrive at the delivery locations near the victims along their optimal paths, and release the items. To demonstrate the potential of the framework, several case studies are outlined covering operations that combine locomotion, obstacle avoidance, grasping, and deploying. The CoppeliaSim robotic simulator is utilised for this framework. Analysis of the snake robots’ motion along the path shows highly accurate movement with and without the emergency item. This study is a step towards a holistic semi-autonomous search and rescue operation.
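
The path-planning step described above is standard grid-based A*. As a minimal sketch of how each snake robot could compute an obstacle-avoiding route to its assigned victim, here is a toy implementation; the 4-connected grid, Manhattan heuristic, and map layout are illustrative assumptions, not details taken from the paper:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid with a Manhattan-distance heuristic.
    grid: 2D list where 0 = free cell and 1 = obstacle."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]          # (f = g + h, g, node, path)
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no obstacle-free path exists

# Toy map: a snake robot routes around an obstacle wall to reach its assigned victim.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
print(a_star(grid, start=(0, 0), goal=(3, 3)))
```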



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2024: 11–15 March 2024, BOULDER, COLORADO, USA
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Figure has raised a US $675 million Series B, valuing the company at $2.6 billion.

[ Figure ]

Meanwhile, here’s how things are going at Agility Robotics, whose last raise was a $150 million Series B in April of 2022.

[ Agility Robotics ]

Also meanwhile, here’s how things are going at Sanctuary AI, whose last raise was a $58.5 million Series A in March of 2022.

[ Sanctuary AI ]

The time has come for humanoid robots to enter industrial production lines and learn how to assist humans by undertaking repetitive, tedious, and potentially dangerous tasks for them. Recently, UBTECH’s humanoid robot Walker S was introduced into the assembly line of NIO’s advanced vehicle-manufacturing center, as an “intern” assisting in the car production. Walker S is the first bipedal humanoid robot to complete a specific workstation’s tasks on a mobile EV production line.

[ UBTECH ]

Henry Evans keeps working hard to make robots better, this time with the assistance of researchers from Carnegie Mellon University.

Henry said he preferred using head-worn assistive teleoperation (HAT) with a robot for certain tasks rather than depending on a caregiver. “Definitely scratching itches,” he said. “I would be happy to have it stand next to me all day, ready to do that or hold a towel to my mouth. Also, feeding me soft foods, operating the blinds, and doing odd jobs around the room.”
One innovation in particular, software called Driver Assistance that helps align the robot’s gripper with an object the user wants to pick up, was “awesome,” Henry said. Driver Assistance leaves the user in control while it makes the fine adjustments and corrections that can make controlling a robot both tedious and demanding. “That’s better than anything I have tried for grasping,” Henry said, adding that he would like to see Driver Assistance used for every interface that controls Stretch robots.

[ HAT2 ] via [ CMU ]

Watch this video for the three glorious seconds at the end.

[ Tech United ]

Get ready to rip, shear, mow, and tear, as DOOM is back! This April, we’re making the legendary game playable on our robotic mowers as a tribute to 30 years of mowing down demons.

Oh, it’s HOOSKvarna, not HUSKvarna.

[ Husqvarna ] via [ Engadget ]

Latest developments demonstrated on the Ameca Desktop platform. Having fun with vision- and voice-cloning capabilities.

[ Engineered Arts ]

Could an artificial-intelligence system learn language from a child? New York University researchers supported by the National Science Foundation, using first-person video from a head-mounted camera, trained AI models to learn language through the eyes and ears of a child.

[ NYU ]

The world’s leaders in manufacturing, natural resources, power, and utilities are using our autonomous robots to gather higher-quality data, in greater quantities, than ever before. Thousands of Spots have been deployed around the world—more than any other walking robot—to tackle this challenge. This release helps maintenance teams tap into the power of AI with new software capabilities and Spot enhancements.

[ Boston Dynamics ]

Modular self-reconfigurable robotic systems are more adaptive than conventional systems. This article proposes a novel free-form and truss-structured modular self-reconfigurable robot called FreeSN, containing node and strut modules. This article presents a novel configuration identification system for FreeSN, including connection point magnetic localization, module identification, module orientation fusion, and system-configuration fusion.

[ Freeform Robotics ]

The OOS-SIM (On-Orbit Servicing Simulator) is a simulator for on-orbit servicing tasks such as repair, maintenance, and assembly that have to be carried out on satellites orbiting the Earth. It simulates the operational conditions in orbit, such as weightlessness and harsh illumination.

[ DLR ]

The next CYBATHLON competition, which will take place again in 2024, breaks down barriers between the public, people with disabilities, researchers and technology developers. From 25 to 27 October 2024, the CYBATHLON will take place in a global format in the Arena Schluefweg in Kloten near Zurich and in local hubs all around the world.

[ CYBATHLON ]

George’s story is a testament to the incredible journey that unfolds when passion, opportunity, and community converge. His journey from drone enthusiast to someone actively making a difference, not only in his local community but also globally, serves as a beacon of hope for all who dare to dream and pursue their passions.

[ WeRobotics ]

In case you’d forgotten, Amazon has a lot of robots.

[ Amazon Robotics ]

ABB’s fifty-year story of robotic innovation began in 1974 with the sale of the world’s first commercial all-electric robot, the IRB 6. Björn Weichbrodt was a key figure in its development.

[ ABB ]

Robotics Debate of the Ingenuity Labs Robotics and AI Symposium (RAIS2023) from October 12, 2023: Is robotics helping or hindering our progress on UN Sustainable Development Goals?

[ Ingenuity Labs ]



Today, Figure is announcing an astonishing US $675 million Series B raise, which values the company at an even more astonishing $2.6 billion. Figure is one of the companies working towards a multi or general purpose (depending on who you ask) bipedal or humanoid (depending on who you ask) robot. The astonishing thing about this valuation is that Figure’s robot is still very much in the development phase—although they’re making rapid progress, which they demonstrate in a new video posted this week.

This round of funding comes from Microsoft, OpenAI Startup Fund, Nvidia, Jeff Bezos (through Bezos Expeditions), Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. Figure says that they’re going to use this new capital “for scaling up AI training, robot manufacturing, expanding engineering headcount, and advancing commercial deployment efforts.” In addition, Figure and OpenAI will be collaborating on the development of “next generation AI models for humanoid robots” which will “help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”

As far as that commercial timeline goes, here’s the most recent update:

Figure

And to understand everything that’s going on here, we sent a whole bunch of questions to Jenna Reher, Senior Robotics/AI Engineer at Figure.

What does “fully autonomous” mean, exactly?

Jenna Reher: In this case, we simply put the robot on the ground and hit go on the task with no other user input. What you see is using a learned vision model for bin detection that allows us to localize the robot relative to the target bin and get the bin pose. The robot can then navigate itself to within reach of the bin, determine grasp points based on the bin pose, and detect grasp success through the measured forces on the hands. Once the robot turns and sees the conveyor the rest of the task rolls out in a similar manner. By doing things in this way we can move the bins and conveyor around in the test space or start the robot from a different position and still complete the task successfully.
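
Reher’s answer describes a perception-driven sequence rather than a fixed trajectory: detect the bin, localize against it, navigate, derive grasp points from the bin pose, and confirm the grasp from hand forces. The sketch below restates that rollout as code against a hypothetical robot interface; every function name and the force threshold are illustrative assumptions, not Figure’s actual software:

```python
GRASP_FORCE_THRESHOLD_N = 15.0   # assumed value for confirming a grasp from hand forces

def run_bin_task(robot):
    """Hypothetical bin-to-conveyor rollout following the steps Reher describes."""
    # 1. A learned vision model detects the target bin and returns its pose,
    #    which also localizes the robot relative to the bin.
    bin_pose = robot.detect_bin()

    # 2. Navigate until the bin is within reach, re-detecting as the view changes.
    while not robot.within_reach(bin_pose):
        robot.step_towards(bin_pose)
        bin_pose = robot.detect_bin()

    # 3. Grasp points come from the bin pose, not hard-coded positions, which is
    #    why the bins and conveyor can be moved around the test space.
    robot.execute_grasp(robot.grasp_points_from_pose(bin_pose))

    # 4. Grasp success is detected from measured forces on the hands.
    if min(robot.hand_forces()) < GRASP_FORCE_THRESHOLD_N:
        return "grasp failed"

    # 5. Turn, detect the conveyor, and the placement phase rolls out the same way.
    robot.place_on(robot.detect_conveyor())
    return "done"
```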

How many takes did it take to get this take?

Reher: We’ve been running this use case consistently for some time now as part of our work in the lab, so we didn’t really have to change much for the filming here. We did two or three practice runs in the morning and then three filming takes. All of the takes were successful, so the extras were to make sure we got the cleanest one to show.

What’s back in the Advanced Actuator Lab?

Reher: We have an awesome team of folks working on some exciting custom actuator designs for our future robots, as well as supporting and characterizing the actuators that went into our current robots.

That’s a very specific number for “speed vs human.” Which human did you measure the robot’s speed against?

Reher: We timed Brett [Adcock, founder of Figure] and a few poor engineers doing the task and took the average to get a rough baseline. If you are observant, that seemingly over-specific number is just saying we’re at 1/6 human speed. The main point that we’re trying to make here is that we are aware we are currently below human speed, and it’s an important metric to track as we improve.

What’s the tether for?

Reher: For this task we currently process the camera data off-robot while all of the behavior planning and control happens onboard in the computer that’s in the torso. Our robots should be fully tetherless in the near future as we finish packaging all of that onboard. We’ve been developing behaviors quickly in the lab here at Figure in parallel to all of the other systems engineering and integration efforts happening, so hopefully folks notice all of these subtle parallel threads converging as we try to release regular updates.

How the heck do you keep your robotics lab so clean?

Reher: Everything we’ve filmed so far is in our large robot test lab, so it’s a lot easier to keep the area clean when people’s desks aren’t intruding in the space. Definitely no guarantees on that level of cleanliness if the camera were pointed in the other direction!

Is the robot in the background doing okay?

Reher: Yes! The other robot was patiently standing there in the background, waiting for the filming to finish up so that our manipulation team could get back to training it to do more manipulation tasks. We hope we can share some more developments with that robot as the main star in the near future.

What would happen if I put a single bowling ball into that tote?

Reher: A bowling ball is particularly menacing to this task primarily due to the moving mass, in addition to the impact if you are throwing it in. The robot would in all likelihood end up dropping the tote, stay standing, and abort the task. With what you see here, we assume that the mass of the tote is known a priori so that our whole-body controller can compensate for the external forces while tracking the manipulation task. Reacting to and estimating larger unknown disturbances such as this is a challenging problem, but we’re definitely working on it.

Tell me more about that very zen arm and hand pose that the robot adopts after putting the tote on the conveyor.

Reher: It does look kind of zen! If you re-watch our coffee video you’ll notice the same pose after the robot gets things brewing. This is a reset pose that our controller will go into between manipulation tasks while the robot is awaiting commands to execute either an engineered behavior or a learned policy.

Are the fingers less fragile than they look?

Reher: They are more robust than they look, but not impervious to damage by any means. The design is pretty modular which is great, meaning that if we damage one or two fingers there is a small number of parts to swap to get everything back up and running. The current fingers won’t necessarily survive a direct impact from a bad fall, but can pick up totes and do manipulation tasks all day without issues.

Is the Figure logo footsteps?

Reher: One of the reasons I really like the figure logo is that it has a bunch of different interpretations depending on how you look at it. In some cases it’s just an F that looks like a footstep plan rollout, while some of the logo animations we have look like active stepping. One other possible interpretation could be an occupancy grid.


One of the big challenges in robotics is the generalization necessary for performing unknown tasks in unknown environments on unknown objects. For us humans, this challenge is simplified by the commonsense knowledge we can access. For cognitive robotics, representing and acquiring commonsense knowledge is a relevant problem, so we perform a systematic literature review to investigate the current state of commonsense knowledge exploitation in cognitive robotics. For this review, we combine a keyword search on six search engines with a snowballing search on six related reviews, resulting in 2,048 distinct publications. After applying pre-defined inclusion and exclusion criteria, we analyse the remaining 52 publications. Our focus lies on the use cases and domains for which commonsense knowledge is employed, the commonsense aspects that are considered, the datasets/resources used as sources for commonsense knowledge and the methods for evaluating these approaches. Additionally, we discovered a divide in terminology between research from the knowledge representation and reasoning and the cognitive robotics community. This divide is investigated by looking at the extensive review performed by Zech et al. (The International Journal of Robotics Research, 2019, 38, 518–562), with whom we have no overlapping publications despite the similar goals.

We explore an alternative approach to the design of robots that deviates from the common envisionment of having one unified agent. What if robots are depicted as an agentic ensemble where agency is distributed over different components? In the project presented here, we investigate the potential contributions of this approach to creating entertaining and joyful human-robot interaction (HRI), which also remains comprehensible to human observers. We built a service robot—which takes care of plants as a Plant-Watering Robot (PWR)—that appears as a small ship controlled by a robotic captain accompanied by kinetic elements. The goal of this narrative design, which utilizes a distributed agency approach, is to make the robot entertaining to watch and foster its acceptance. We discuss the robot’s design rationale and present observations from an exploratory study in two contrastive settings, on a university campus and in a care home for people with dementia, using a qualitative video-based approach for analysis. Our observations indicate that such a design has potential regarding the attraction, acceptance, and joyfulness it can evoke. We discuss aspects of this design approach regarding the field of elderly care, limitations of our study, and identify potential fields of use and further scopes for studies.

Objective: For transradial amputees, robotic prosthetic hands promise to regain the capability to perform daily living activities. Current control methods based on physiological signals such as electromyography (EMG) are prone to yielding poor inference outcomes due to motion artifacts, muscle fatigue, and many more. Vision sensors are a major source of information about the environment state and can play a vital role in inferring feasible and intended gestures. However, visual evidence is also susceptible to its own artifacts, most often due to object occlusion, lighting changes, etc. Multimodal evidence fusion using physiological and vision sensor measurements is a natural approach due to the complementary strengths of these modalities.

Methods: In this paper, we present a Bayesian evidence fusion framework for grasp intent inference using eye-view video, eye-gaze, and EMG from the forearm processed by neural network models. We analyze individual and fused performance as a function of time as the hand approaches the object to grasp it. For this purpose, we have also developed novel data processing and augmentation techniques to train neural network components.

Results: Our results indicate that, on average, fusion improves the instantaneous upcoming grasp type classification accuracy while in the reaching phase by 13.66% and 14.8%, relative to EMG (81.64% non-fused) and visual evidence (80.5% non-fused) individually, resulting in an overall fusion accuracy of 95.3%.

Conclusion: Our experimental data analyses demonstrate that EMG and visual evidence show complementary strengths, and as a consequence, fusion of multimodal evidence can outperform each individual evidence modality at any given time.
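
The abstract doesn’t spell out the fusion rule, but a common baseline for this kind of multimodal evidence fusion is a naive-Bayes combination of per-modality class posteriors under a conditional-independence assumption. The sketch below shows only that generic idea; it is not the authors’ implementation, and the grasp classes and probabilities are invented for illustration:

```python
import numpy as np

def fuse_posteriors(posteriors, prior):
    """Naive-Bayes fusion of per-modality class posteriors.

    Assuming modalities are conditionally independent given the grasp class c:
        p(c | e_1..e_K)  is proportional to  prior(c) * prod_k [ p(c | e_k) / prior(c) ]
    """
    prior = np.asarray(prior, dtype=float)
    log_fused = np.log(prior)
    for p in posteriors:
        log_fused += np.log(np.asarray(p, dtype=float)) - np.log(prior)
    fused = np.exp(log_fused - log_fused.max())   # subtract max for numerical stability
    return fused / fused.sum()

# Invented per-modality outputs for three hypothetical grasp types.
grasp_types = ["power", "precision", "lateral"]
emg    = [0.50, 0.30, 0.20]   # EMG network posterior (illustrative)
vision = [0.45, 0.45, 0.10]   # eye-view video posterior (illustrative)
gaze   = [0.40, 0.50, 0.10]   # eye-gaze posterior (illustrative)
prior  = [1 / 3, 1 / 3, 1 / 3]

print(dict(zip(grasp_types, fuse_posteriors([emg, vision, gaze], prior).round(3))))
```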

The robotics discipline is exploring precise and versatile solutions for upper-limb rehabilitation in Multiple Sclerosis (MS). People with MS can greatly benefit from robotic systems to help combat the complexities of this disease, which can impair the ability to perform activities of daily living (ADLs). In order to present the potential and the limitations of smart mechatronic devices in the mentioned clinical domain, this review is structured to propose a concise SWOT (Strengths, Weaknesses, Opportunities, and Threats) Analysis of robotic rehabilitation in MS. Through the SWOT Analysis, a method mostly adopted in business management, this paper addresses both internal and external factors that can promote or hinder the adoption of upper-limb rehabilitation robots in MS. Subsequently, it discusses how the synergy with another category of interaction technologies - the systems underlying virtual and augmented environments - may empower Strengths, overcome Weaknesses, expand Opportunities, and handle Threats in rehabilitation robotics for MS. The impactful adaptability of these digital settings (extensively used in rehabilitation for MS, even to approach ADL-like tasks in safe simulated contexts) is the main reason for presenting this approach to face the critical issues of the aforementioned SWOT Analysis. This methodological proposal aims at paving the way for devising further synergistic strategies based on the integration of medical robotic devices with other promising technologies to help upper-limb functional recovery in MS.

Creating an accurate model of a user’s skills is an essential task for Intelligent Tutoring Systems (ITS) and robotic tutoring systems. This allows the system to provide personalized help based on the user’s knowledge state. Most user skill modeling systems have focused on simpler tasks such as arithmetic or multiple-choice questions, where the user’s model is only updated upon task completion. These tasks have a single correct answer and generate an unambiguous observation of the user’s answer. This is not the case for more complex tasks such as programming or engineering tasks, where the user completing the task creates a succession of noisy user observations as they work on different parts of the task. We create an algorithm called Time-Dependent Bayesian Knowledge Tracing (TD-BKT) that tracks users’ skills throughout these more complex tasks. We show in simulation that it has a more accurate model of the user’s skills and, therefore, can select better teaching actions than previous algorithms. Lastly, we show that a robot can use TD-BKT to model a user and teach electronic circuit tasks to participants during a user study. Our results show that participants significantly improved their skills when modeled using TD-BKT.
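
TD-BKT builds on standard Bayesian Knowledge Tracing (BKT), which maintains a mastery probability per skill and updates it after each (possibly noisy) observation. For context, here is the classic BKT update only; the time-dependent extension proposed in the paper is not reproduced here, and the parameter values are purely illustrative:

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One classic Bayesian Knowledge Tracing step.

    p_know:    prior probability that the user already knows the skill
    correct:   whether the latest observation showed a correct application
    p_slip:    probability of an error despite knowing the skill
    p_guess:   probability of success without knowing the skill
    p_transit: probability of learning the skill from this practice opportunity
    """
    if correct:
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Fold in the chance that the skill was learned during this opportunity.
    return posterior + (1 - posterior) * p_transit

# A near-novice user (p = 0.2) observed applying the skill correctly three times.
p = 0.2
for observation in (True, True, True):
    p = bkt_update(p, observation)
    print(round(p, 3))
```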



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios. In order to interact with and manipulate their environments, most legged robots are equipped with a dedicated robot arm, which means additional mass and mechanical complexity compared to standard legged robots. In this work, we explore pedipulation—using the legs of a legged robot for manipulation.

This work, by Philip Arm, Mayank Mittal, Hendrik Kolvenbach, and Marco Hutter from ETHZ RSL, will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2024) in May in Japan (see events calendar above).

[ Pedipulate ]

I learned a new word today! “Stigmergy.” Stigmergy is a kind of group coordination that’s based on environmental modification. Like, when insects leave pheromone trails, they’re not directly sending messages to other individuals, but as a group the ants are able to manifest surprisingly complex coordinated behaviors. Cool, right? Researchers at IRIDIA are exploring the possibilities for robots using stigmergy with a cool ‘artificial pheromone’ system based on a UV-sensitive surface.
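
To make the mechanism concrete, here is a self-contained toy simulation of stigmergy: agents deposit “pheromone” on a grid, prefer cells with stronger trails, and the trails slowly evaporate, so coordination emerges purely from environmental modification. This is a generic illustration only, not the IRIDIA system or its UV-sensitive pheromone surface:

```python
import random

SIZE, STEPS, EVAPORATION, DEPOSIT = 15, 200, 0.02, 1.0
pheromone = [[0.0] * SIZE for _ in range(SIZE)]
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(10)]

def neighbors(r, c):
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]

for _ in range(STEPS):
    moved = []
    for r, c in agents:
        pheromone[r][c] += DEPOSIT                  # modify the shared environment
        options = neighbors(r, c)
        if random.random() < 0.8:                   # mostly follow the strongest local trail
            moved.append(max(options, key=lambda p: pheromone[p[0]][p[1]]))
        else:                                       # occasionally explore at random
            moved.append(random.choice(options))
    agents = moved
    for row in pheromone:                           # trails evaporate over time
        for i, value in enumerate(row):
            row[i] = value * (1 - EVAPORATION)

# Agents end up clustered on a few reinforced trails without ever messaging each other.
print(sorted(set(agents)))
```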

“Automatic design of stigmergy-based behaviors for robot swarms,” by Muhammad Salman, David Garzón Ramos, and Mauro Birattari, is published in the journal Communications Engineering.

[ Nature ] via [ IRIDIA ]

Thanks, David!

Filmed in July 2017, this video shows Atlas walking through a “hatch” on a pitching surface. This uses autonomous behaviors, with the robot not knowing about the rocking world. Robot built by Boston Dynamics for the DARPA Robotics Challenge in 2013. Software by IHMC Robotics.

[ IHMC ]

That IHMC video reminded me of the SAFFiR program for Shipboard Autonomous Firefighting Robots, which is responsible for a bunch of really cool research in partnership with the United States Naval Research Laboratory. NRL did some interesting stuff with Nexi robots from MIT and made their own videos. I think that effort didn’t get nearly enough credit for being very entertaining while also communicating important robotics research.

[ NRL ]

I want more robot videos with this energy.

[ MIT CSAIL ]

Large industrial asset operators increasingly use robotics to automate hazardous work at their facilities. This has led to soaring demand for autonomous inspection solutions like ANYmal. Series production by our partner Zollner enables ANYbotics to supply our customers with the required quantities of robots.

[ ANYbotics ]

This week is Grain Bin Safety Week, and Grain Weevil is here to help.

[ Grain Weevil ]

Oof, this is some heavy, heavy deep-time stuff.

[ Onkalo ]

And now, this.

[ RozenZebet ]

Hawkeye is a real-time multimodal conversation and interaction agent for Boston Dynamics’ mobile robot Spot. Leveraging OpenAI’s experimental GPT-4 Turbo and Vision AI models, Hawkeye aims to empower everyone, from seniors to healthcare professionals, in forming new and unique interactions with the world around them.

That moment at 1:07 is so relatable.

[ Hawkeye ]

Wing would really prefer that if you find one of their drones on the ground, you don’t run off with it.

[ Wing ]

The rover Artemis, developed at the DFKI Robotics Innovation Center, has been equipped with a penetrometer that measures the soil’s penetration resistance to obtain precise information about soil strength. The video showcases an initial test run with the device mounted on the robot. During this test, the robot was remotely controlled, and the maximum penetration depth was limited to 15 mm.

[ DFKI ]

To efficiently achieve complex humanoid loco-manipulation tasks in industrial contexts, we propose a combined vision-based tracker-localization interplay integrated as part of a task-space whole-body optimization control. Our approach allows humanoid robots, targeted for industrial manufacturing, to manipulate and assemble large-scale objects while walking.

[ Paper ]

We developed a novel multi-body robot (called the Two-Body Bot) consisting of two small-footprint mobile bases connected by a four-bar linkage on which handlebars are mounted. Each base measures only 29.2 cm wide, making the robot likely the slimmest ever developed for mobile postural assistance.

[ MIT ]

Lex Fridman interviews Marc Raibert.

[ Lex Fridman ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

HRI 2024: 11–15 March 2024, BOULDER, COLO.Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCEICRA 2024: 13–17 May 2024, YOKOHAMA, JAPANRoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios. In order to interact with and manipulate their environments, most legged robots are equipped with a dedicated robot arm, which means additional mass and mechanical complexity compared to standard legged robots. In this work, we explore pedipulation—using the legs of a legged robot for manipulation.

This work, by Philip Arm, Mayank Mittal, Hendrik Kolvenbach, and Marco Hutter from ETHZ RSL, will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2024) in May in Japan (see events calendar above).

[ Pedipulate ]

I learned a new word today! “Stigmergy.” Stigmergy is a kind of group coordination that’s based on environmental modification. Like, when insects leave pheromone trails, they’re not directly sending messages to other individuals, but as a group the ants are able to manifest surprisingly complex coordinated behaviors. Cool, right? Researchers are IRIDIA are exploring the possibilities for robots using stigmergy with a cool ‘artificial pheromone’ system using a UV-sensitive surface.

“Automatic design of stigmergy-based behaviors for robot swarms,” by Muhammad Salman, David Garzón Ramos, and Mauro Birattari, is published in the journal Communications Engineering.

[ Nature ] via [ IRIDIA ]

Thanks, David!

Filmed in July 2017, this video shows Atlas walking through a “hatch” on a pitching surface. This uses autonomous behaviors, with the robot not knowing about the rocking world. Robot built by Boston Dynamics for the DARPA Robotics Challenge in 2013. Software by IHMC Robotics.

[ IHMC ]

That IHMC video reminded me of the SAFFiR program for Shipboard Autonomous Firefighting Robots, which is responsible for a bunch of really cool research in partnership with the United States Naval Research Laboratory. NRL did some interesting stuff with Nexi robots from MIT and made their own videos. I think that effort didn’t get nearly enough credit for being very entertaining while communicating important robotics research.

[ NRL ]

I want more robot videos with this energy.

[ MIT CSAIL ]

Large industrial asset operators increasingly use robotics to automate hazardous work at their facilities. This has led to soaring demand for autonomous inspection solutions like ANYmal. Series production by our partner Zollner enables ANYbotics to supply our customers with the required quantities of robots.

[ ANYbotics ]

This week is Grain Bin Safety Week, and Grain Weevil is here to help.

[ Grain Weevil ]

Oof, this is some heavy, heavy deep-time stuff.

[ Onkalo ]

And now, this.

[ RozenZebet ]

Hawkeye is a real-time multimodal conversation and interaction agent for Boston Dynamics’ mobile robot Spot. Leveraging OpenAI’s experimental GPT-4 Turbo and Vision AI models, Hawkeye aims to empower everyone, from seniors to healthcare professionals, in forming new and unique interactions with the world around them.

That moment at 1:07 is so relatable.

[ Hawkeye ]

Wing would really prefer that if you find one of their drones on the ground, you don’t run off with it.

[ Wing ]

The rover Artemis, developed at the DFKI Robotics Innovation Center, has been equipped with a penetrometer that measures the soil’s penetration resistance to obtain precise information about soil strength. The video showcases an initial test run with the device mounted on the robot. During this test, the robot was remotely controlled, and the maximum penetration depth was limited to 15 mm.

[ DFKI ]

To efficiently achieve complex humanoid loco-manipulation tasks in industrial contexts, we propose a combined vision-based tracker-localization interplay integrated as part of a task-space whole-body optimization control. Our approach allows humanoid robots, targeted for industrial manufacturing, to manipulate and assemble large-scale objects while walking.

[ Paper ]

We developed a novel multi-body robot (called the Two-Body Bot) consisting of two small-footprint mobile bases connected by a four-bar linkage on which handlebars are mounted. Each base measures only 29.2 cm wide, making the robot likely the slimmest ever developed for mobile postural assistance.

[ MIT ]

Lex Fridman interviews Marc Raibert.

[ Lex Fridman ]



Dina Genkina: Hi. I’m Dina Genkina for IEEE Spectrum’s Fixing the Future. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. Today, my guest is Dr. Benji Maruyama, a Principal Materials Research Engineer at the Air Force Research Laboratory, or AFRL. Dr. Maruyama is a materials scientist, and his research focuses on carbon nanotubes and making research go faster. But he’s also a man with a dream, a dream of a world where science isn’t something done by a select few locked away in an ivory tower, but something most people can participate in. He hopes to start what he calls the billion scientist movement by building AI-enabled research robots that are accessible to all. Benji, thank you for coming on the show.

Benji Maruyama: Thanks, Dina. Great to be with you. I appreciate the invitation.

Genkina: Yeah. So let’s set the scene a little bit for our listeners. So you advocate for this billion scientist movement. If everything works amazingly, what would this look like? Paint us a picture of how AI will help us get there.

Maruyama: Right, great. Thanks. Yeah. So one of the things, as you set the scene there, is that right now, to be a scientist, most people need to have access to a big lab with very expensive equipment. So I think top universities, government labs, industry folks, lots of equipment. It’s like a million dollars, right, to get one of them. And frankly, just not that many of us have access to those kinds of instruments. But at the same time, there’s probably a lot of us who want to do science, right? And so how do we make it so that anyone who wants to do science can try, can have access to instruments so that they can contribute to it? So that’s the basics behind citizen science, or democratization of science, so that everyone can do it. And one way to think of it is what happened with 3D printing. It used to be that in order to make something, you had to have access to a machine shop, or maybe get fancy tools and dies that could cost tens of thousands of dollars a pop. Or if you wanted to do electronics, you had to have access to very expensive equipment or services. But when 3D printers came along and became very inexpensive, all of a sudden now, anyone with access to a 3D printer, so maybe in a school or a library or a makerspace, could print something out. And it could be something fun, like a game piece, but it could also be something that got you to an invention, something that was maybe useful to the community, was either a prototype or an actual working device.

And so really, 3D printing democratized manufacturing, right? It made it so that many more of us could do things that before only a select few could. And so that’s where we’re trying to go with science now, is that instead of only those of us who have access to big labs, we’re building research robots. And when I say we, we’re doing it, but now there are a lot of others who are doing it as well, and I’ll get into that. But the example that we have is that we took a 3D printer that you can buy off the internet for less than $300. Plus a couple of extra parts, a webcam, a Raspberry Pi board, and a tripod really, so only four components. You can get them all for $300. Load them with open-source software that was developed by AFIT, the Air Force Institute of Technology. So Burt Peterson and Greg Captain [inaudible]. We worked together to build this fully autonomous 3D printing robot that taught itself how to print to better than manufacturer’s specifications. So that was a really fun advance for us, and now we’re trying to take that same idea and broaden it. So I’ll turn it back over to you.
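
To make the "taught itself how to print" idea concrete, here is a deliberately minimal closed-loop sketch in Python. It is my own simplification, not the AFIT/AFRL software: pretend to print a test line, "measure" its width (a stand-in function instead of a real webcam), and proportionally correct the flow rate until the width hits a target.

# Minimal closed-loop tuning sketch (my own simplification, not the AFIT/AFRL
# software): "print" a test line, "measure" its width, and proportionally
# correct the flow rate until the width hits the target.
TARGET_WIDTH_MM = 0.45   # desired extruded line width
GAIN = 0.5               # proportional correction gain
TOLERANCE_MM = 0.005     # stop once we're within this of the target

def measure_line_width_mm(flow_rate: float) -> float:
    # Stand-in for "print a line, photograph it with the webcam, measure it".
    # Here we just pretend line width scales linearly with flow rate.
    return 0.5 * flow_rate

def tune_flow_rate(flow_rate: float = 1.0, max_iters: int = 20) -> float:
    for step in range(max_iters):
        width = measure_line_width_mm(flow_rate)
        error = width - TARGET_WIDTH_MM
        print(f"iteration {step}: flow={flow_rate:.3f}, width={width:.3f} mm")
        if abs(error) < TOLERANCE_MM:
            break
        # A line that is too wide means too much material: reduce the flow.
        flow_rate *= 1.0 - GAIN * (error / TARGET_WIDTH_MM)
    return flow_rate

print("tuned flow rate:", round(tune_flow_rate(), 3))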

Genkina: Yeah, okay. So maybe let’s talk a little bit about this automated research robot that you’ve made. So right now, it works with a 3D printer, but is the big picture that one day it’s going to give people access to that million dollar lab? How would that look like?

Maruyama: Right, so there are different models out there. One, we just did a workshop at the University of— sorry, North Carolina State University about that very problem, right? So there are two models. One is to get low-cost scientific tools like the 3D printer. There’s a couple of different chemistry robots, one out of University of Maryland and NIST, one out of University of Washington, that are in the sort of 300 to 1,000 dollar range, which makes them accessible. The other part is kind of the user facility model. So in the US, the Department of Energy National Labs have many user facilities where you can apply to get time on very expensive instruments. Now we’re talking tens of millions. For example, Brookhaven has a synchrotron light source where you can sign up and it doesn’t cost you any money to use the facility. And you can get days on that facility. And so that’s already there, but now the advance is that by using this autonomy, this autonomous closed-loop experimentation, the work that you do will be much faster and much more productive. So, for example, on ARES, our Autonomous Research System at AFRL, we actually were able to do experiments so fast that a professor who came into my lab just took me aside and said, “Hey, Benji, in a week’s worth of time, I did a dissertation’s worth of research.” So maybe five years’ worth of research in a week. So imagine if you keep doing that week after week after week, how fast research goes. So it’s very exciting.

Genkina: Yeah, so tell us a little bit about how that works. So what’s this system that has sped up five years of research into a week and made graduate students obsolete? Not yet, not yet. How does that work? Is that the 3D printer system or is that a—

Maruyama: So we started with our system to grow carbon nanotubes. And I’ll say, actually, when we first thought about it, your comment about graduate students being absolute— obsolete, sorry, is interesting and important because, when we first built our system that worked 100 times faster than normal, I thought that might be the case. We called it sort of graduate-student-out-of-the-loop. But when I started talking with people who specialize in autonomy, it’s actually the opposite, right? It’s actually empowering graduate students to go faster and also to do the work that they want to do, right? And so just to digress a little bit, if you think about farmers before the Industrial Revolution, what were they doing? They were plowing fields with oxen and beasts of burden and hand plows. And it was hard work. And now, of course, you wouldn’t ask a farmer today to give up their tractor or their combine harvester, right? They would say, of course not. So very soon, we expect it to be the same for researchers, that if you asked a graduate student to give up their autonomous research robot five years from now, they’ll say, “Are you crazy? This is how I get my work done.”

But for our original ARES system, it worked on the synthesis of carbon nanotubes. So that meant that what we’re doing is trying to take this system that’s been pretty well studied, but we haven’t figured out how to make it at scale. So at hundreds of millions of tons per year, sort of like polyethylene production. And part of that is because it’s slow, right? One experiment takes a day, but also because there are just so many different ways to do a reaction, so many different combinations of temperature and pressure and a dozen different gases and half the periodic table as far as the catalyst. It’s just too much to brute force your way through. So even though we went from one experiment a day to 100 experiments a day, that combinatorial space still vastly overwhelmed our ability to do it, even with many research robots or many graduate students. So the idea of having artificial intelligence algorithms that drive the research is key. And so that ability to do an experiment, see what happened, analyze it, iterate, and constantly be able to choose the optimal next best experiment to do is where ARES really shines. And so that’s what we did. ARES taught itself how to grow carbon nanotubes at controlled rates. And we were the first ones to do that for materials science in our 2016 publication.

Genkina: That’s very exciting. So maybe we can peer under the hood a little bit of this AI model. How does the magic work? How does it pick the next best point to take and why it’s better than you could do as a graduate student or researcher?

Maruyama: Yeah, and so I think it’s interesting, right? In science, a lot of times we’re taught to hold everything constant, change one variable at a time, search over that entire space, see what happened, and then go back and try something else, right? So we reduce it to one variable at a time. It’s a reductionist approach. And that’s worked really well, but a lot of the problems that we want to go after are simply too complex for that reductionist approach. And so the benefit of being able to use artificial intelligence is that high dimensionality is no problem, right? Searching over a very complex, high-dimensional parameter space with tens of dimensions, which is overwhelming to humans, is basically bread and butter for AI. The other part to it is the iterative part. The beauty of doing autonomous experimentation is that you’re constantly iterating. You’re constantly learning over what just happened. You might also say, well, not only do I know what happened experimentally, but I have other sources of prior knowledge, right? So for example, the ideal gas law says that this should happen, right? Or the Gibbs phase rule might say, this can happen or this can’t happen. So you can use that prior knowledge to say, “Okay, I’m not going to do those experiments because that’s not going to work. I’m going to try here because this has the best chance of working.”

And within that, there are many different machine learning or artificial intelligence algorithms. Bayesian optimization is a popular one to help you choose what experiment is best. There’s also new AI that people are trying to develop to get better search.
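
To make the "choose the optimal next experiment" loop concrete, here is a minimal Bayesian-optimization sketch in Python using a Gaussian-process surrogate and an expected-improvement score. The objective function is a made-up stand-in for a real measurement such as nanotube growth rate; this shows the general pattern being described, not ARES itself.

# "Choose the next best experiment" sketch using Bayesian optimization.
# The objective below is a made-up stand-in for a real measurement (say,
# nanotube growth rate); this is the general pattern, not ARES itself.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def run_experiment(x):
    """Pretend experiment: settings in [0, 1]^3 go in, a noisy yield comes out."""
    return float(np.exp(-np.sum((x - 0.6) ** 2) / 0.05) + rng.normal(0, 0.01))

def expected_improvement(mu, sigma, best):
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

dim, n_init, n_rounds, n_candidates = 3, 5, 20, 2000
X = rng.uniform(0, 1, size=(n_init, dim))            # a few random starting experiments
y = np.array([run_experiment(x) for x in X])

for _ in range(n_rounds):
    surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                         normalize_y=True)
    surrogate.fit(X, y)                               # learn from every result so far
    candidates = rng.uniform(0, 1, size=(n_candidates, dim))
    mu, sigma = surrogate.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])                        # run the chosen experiment next
    y = np.append(y, run_experiment(x_next))

print("best settings found:", np.round(X[np.argmax(y)], 3),
      "yield:", round(float(y.max()), 3))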

Genkina: Cool. And so the software part of this autonomous robot is available for anyone to download, which is also really exciting. So what would someone need to do to be able to use that? Do they need to get a 3D printer and a Raspberry Pi and set it up? And what would they be able to do with it? Can they just build carbon nanotubes or can they do more stuff?

Maruyama: Right. So what we did, we built ARES OS, which is our open source software, and we’ll make sure to get you the GitHub link so that anyone can download it. And the idea behind ARES OS is that it provides a software framework for anyone to build their own autonomous research robot. And so the 3D printing example will be out there soon. But it’s the starting point. Of course, if you want to build your own new kind of robot, you still have to do the software development, for example, to link the ARES framework, the core, if you will, to your particular hardware, maybe your particular camera or 3D printer, or pipetting robot, or spectrometer, whatever that is. We have examples out there and we’re hoping to get to a point where it becomes much more user-friendly. So having direct Python connects so that you don’t— currently it’s programmed in C#. But to make it more accessible, we’d like it to be set up so that if you can do Python, you can probably have good success in building your own research robot.
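
ARES OS itself is written in C#, and none of its real interfaces are reproduced here. Purely as a hypothetical illustration of what "linking the framework core to your particular hardware" can look like, here is a small Python sketch of the general pattern: a driver interface that a printer, camera, or spectrometer adapter implements, so the experiment-planning loop never touches device details.

# Hypothetical illustration only: ARES OS is written in C#, and none of its
# real interfaces are reproduced here. This just sketches the general pattern
# of separating an experiment-planning core from swappable hardware drivers.
import random
from abc import ABC, abstractmethod

class Instrument(ABC):
    """Anything the planning loop can command and read back."""

    @abstractmethod
    def apply(self, settings: dict) -> None:
        """Push the chosen experimental settings to the hardware."""

    @abstractmethod
    def measure(self) -> float:
        """Return the observable the planner is trying to optimize."""

class SimulatedPrinter(Instrument):
    """Stand-in driver; a real adapter would wrap G-code, a webcam, and so on."""
    def __init__(self) -> None:
        self._flow = 1.0

    def apply(self, settings: dict) -> None:
        self._flow = settings.get("flow_rate", self._flow)

    def measure(self) -> float:
        return -abs(0.5 * self._flow - 0.45)   # closer to the target width is better

def run_campaign(instrument: Instrument, planner, n_experiments: int = 10):
    history = []
    for _ in range(n_experiments):
        settings = planner(history)            # the planner proposes the next settings
        instrument.apply(settings)
        history.append((settings, instrument.measure()))
    return history

# Trivial random-search planner standing in for the Bayesian-optimization
# planner sketched earlier; swapping planners or instruments never touches
# the campaign loop itself.
best = max(run_campaign(SimulatedPrinter(),
                        lambda history: {"flow_rate": random.uniform(0.5, 1.5)}),
           key=lambda item: item[1])
print("best settings:", best[0])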

Genkina: Cool. And you’re also working on an educational version of this, I understand. So what’s the status of that and what’s different about that version?

Maruyama: Yeah, right. So the educational version is going to be— it’s sort of a combination of hardware and software. So what we’re starting with is a low-cost 3D printer. And we’re collaborating now with the University at Buffalo’s Materials Design and Innovation Department. And we’re hoping to build up a robot based on a 3D printer. And we’ll see how it goes. It’s still evolving. But for example, it could be based on this very inexpensive $200 3D printer. It’s an Ender 3D printer. There’s another printer out there that’s based on University of Washington’s Jubilee printer. And that’s a very exciting development as well. So professors Lilo Pozzo and Nadya Peek at the University of Washington built this Jubilee robot with that idea of accessibility in mind. And so combining our ARES OS software with their Jubilee robot hardware is something that I’m very excited about and hope to be able to move forward on.

Genkina: What’s this Jubilee 3D printer? How is it different from a regular 3D printer?

Maruyama: It’s very open source. Not all 3D printers are open source, and it’s based on a gantry system with interchangeable heads. So for example, you can get not just a 3D printing head, but other heads that might do things like indentation, to see how stiff something is, or maybe put a camera on there that can move around. And so it’s the flexibility of being able to pick different heads dynamically that I think makes it super useful. For the software, right, we have to have a good, accessible, user-friendly graphical user interface, a GUI. That takes time and effort, so we want to work on that. But again, that’s just the hardware and software. Really, to make ARES a good educational platform, we need to make it so that a teacher who’s interested has the lowest activation barrier possible, right? We want her or him to be able to pull a lesson plan off of the internet, have supporting YouTube videos, and actually have a fully developed curriculum that’s mapped against state standards.

So that, right now, if you’re a teacher who— let’s face it, teachers are already overwhelmed with all that they have to do, putting something like this into their curriculum can be a lot of work, especially if you have to think about, well, I’m going to take all this time, but I also have to meet all of my teaching standards, all the state curriculum standards. And so if we build that out so that it’s a matter of just looking at the curriculum and just checking off the boxes of what state standards it maps to, then that makes it that much easier for the teacher to teach.

Genkina: Great. And what do you think is the timeline? Do you expect to be able to do this sometime in the coming year?

Maruyama: That’s right. These things always take longer than hoped for than expected, but we’re hoping to do it within this calendar year and very excited to get it going. And I would say for your listeners, if you’re interested in working together, please let me know. We’re very excited about trying to involve as many people as we can.

Genkina: Great. Okay, so you have the educational version, and you have the more research geared version, and you’re working on making this educational version more accessible. Is there something with the research version that you’re working on next, how you’re hoping to upgrade it, or is there something you’re using it for right now that you’re excited about?

Maruyama: There are a number of things. We’re very excited about the possibility of carbon nanotubes being produced at very large scale. So right now, people may remember carbon nanotubes as that great material that sort of never made it and was very overhyped. But there’s a core group of us who are still working on it because of the important promise of that material. So it’s a material that is super strong, stiff, lightweight, and electrically conductive. Much better than silicon as a digital electronics compute material. All of those great things, except we’re not making it at large enough scale. It’s actually used pretty significantly in lithium-ion batteries. It’s an important application. But other than that, it’s sort of like, where’s my flying car? It never panned out. But there’s, as I said, a group of us who are working to really produce carbon nanotubes at much larger scale. So large scale for nanotubes now is sort of in the kilogram or ton range. But what we need to get to is hundreds of millions of tons per year production rates. And why is that? Well, there’s a great effort that came out of ARPA-E, the Department of Energy’s Advanced Research Projects Agency, and the E is for Energy in that case.

So they funded a collaboration between Shell Oil and Rice University to pyrolyze methane, so natural gas into hydrogen for the hydrogen economy. So now that’s a clean burning fuel plus carbon. And instead of burning the carbon to CO2, which is what we now do, right? We just take natural gas and feed it through a turbine and generate electric power instead of— and that, by the way, generates so much CO2 that it’s causing global climate change. So if we can do that pyrolysis at scale, at hundreds of millions of tons per year, it’s literally a save the world proposition, meaning that we can avoid so much CO2 emissions that we can reduce global CO2 emissions by 20 to 40 percent. And that is the save the world proposition. It’s a huge undertaking, right? That’s a big problem to tackle, starting with the science. We still don’t have the science to efficiently and effectively make carbon nanotubes at that scale. And then, of course, we have to take the material and turn it into useful products. So the batteries is the first example, but thinking about replacing copper for electrical wire, replacing steel for structural materials, aluminum, all those kinds of applications. But we can’t do it. We can’t even get to that kind of development because we haven’t been able to make the carbon nanotubes at sufficient scale.

So I would say that’s something that I’m working on now that I’m very excited about and trying to get there, but it’s going to take some good developments in our research robots and some very smart people to get us there.
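
As a back-of-the-envelope check on the chemistry described above (my arithmetic, not a figure from the interview): pyrolysis splits methane into solid carbon and hydrogen, CH4 -> C + 2 H2, so every tonne of methane processed keeps roughly three quarters of a tonne of carbon in solid form instead of releasing roughly 2.7 tonnes of CO2 through combustion.

# Back-of-the-envelope stoichiometry for methane pyrolysis, CH4 -> C + 2 H2
# (my arithmetic, not a figure from the interview). Burning the same methane
# would instead release its carbon as CO2.
M_CH4, M_C, M_H2, M_CO2 = 16.04, 12.01, 2.016, 44.01   # molar masses, g/mol

tonnes_ch4 = 1.0
mol = tonnes_ch4 / M_CH4            # "tonne-moles" of methane processed
solid_carbon = mol * M_C            # tonnes of carbon kept in solid form
hydrogen = mol * 2 * M_H2           # tonnes of H2 fuel produced
co2_if_burned = mol * M_CO2         # tonnes of CO2 emitted if burned instead

print(f"per tonne of CH4 pyrolyzed: {solid_carbon:.2f} t solid carbon, "
      f"{hydrogen:.2f} t hydrogen, {co2_if_burned:.2f} t CO2 not emitted")
# per tonne of CH4 pyrolyzed: 0.75 t solid carbon, 0.25 t hydrogen, 2.74 t CO2 not emitted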

Genkina: Yeah, it seems so counterintuitive that making everything out of carbon is good for lowering carbon emissions, but I guess that’s the break.

Maruyama: Yeah, it is interesting, right? So people talk about carbon emissions, but really, the molecule that’s causing global warming is carbon dioxide, CO2, which you get from burning carbon. And so if you take that methane and pyrolyze it to carbon nanotubes, that carbon is now sequestered, right? It’s not going off as CO2. It’s staying in solid state. And not only is it not going up into the atmosphere, but now we’re using it to replace steel, for example, which, by the way, steel, aluminum, copper production, all of those things emit lots of CO2 in their production, right? They’re energy-intensive materials to produce. So it’s kind of ironic.

Genkina: Okay, and are there any other research robots that you’re excited about that you think are also contributing to this democratization of science process?

Maruyama: Yeah, so we talked about Jubilee. The NIST robot is from Professor Ichiro Takeuchi at Maryland and Gilad Kusne at NIST, the National Institute of Standards and Technology. Theirs is fun too. It’s LEGO-based. So it’s actually a chemistry robot built on a LEGO robotics platform, out of Legos. So I think that’s fun as well. And you can imagine, just like we have LEGO robot competitions, we can have autonomous research robot competitions where we try and do research through these robots, or competitions where everybody sort of starts with the same robot, just like with LEGO robotics. So that’s fun as well. But I would say there’s a growing number of people doing these kinds of, first of all, low-cost science, accessible science, but in particular low-cost autonomous experimentation.

Genkina: So how far are we from a world where a high school student has an idea and they can just go and carry it out on some autonomous research system at some high-end lab?

Maruyama: That’s a really good question. I hope that it’s going to be in 5 to 10 years that it becomes reasonably commonplace. But it’s still going to take some significant investment to get this going. And so we’ll see how that goes. But I don’t think there are any scientific impediments to getting this done. There is a significant amount of engineering to be done. And sometimes we hear, oh, it’s just engineering. The engineering is a significant problem. And it’s work to get some of these things accessible and low cost. But there are lots of great efforts. There are people who have used CDs, compact discs, to make spectrometers out of. There are lots of good examples of citizen science out there. But it’s, I think, at this point going to take investment in software and in hardware to make it accessible, and then, importantly, getting students really up to speed on what AI is and how it works and how it can help them. And so I think it’s actually really important. So again, that’s the democratization of science: if we can make it available and accessible to everyone, then that helps everyone contribute to science. And I do believe that there are important contributions to be made by ordinary citizens, by people who aren’t, you know, PhDs working in a lab.

And I think there’s a lot of science out there to be done. If you ask working scientists, almost no one has run out of ideas or things they want to work on. There are many more scientific problems to work on than we have the time, people, or funding to work on. And so if we make science cheaper to do, then all of a sudden, more people can do science. And so those questions start to be resolved. And so I think that’s super important. And now, instead of just those of us who work in big labs, you have millions, tens of millions, up to a billion people (that’s the billion scientist idea) who are contributing to the scientific community. And that, to me, is so powerful: many more of us can contribute than just the few of us who do it right now.

Genkina: Okay, that’s a great place to end on, I think. So, today we spoke to Dr. Benji Maruyama, a material scientist at AFRL, about his efforts to democratize scientific discovery through automated research robots. For IEEE Spectrum, I’m Dina Genkina, and I hope you’ll join us next time on Fixing the Future.



Background: Assistive Robotic Arms (ARAs) are designed to assist physically disabled people with daily activities. Existing joysticks and head controls are not usable by severely disabled people, such as those with Locked-in Syndrome. Therefore, eye tracking control is part of ongoing research. The related literature spans many disciplines, creating a heterogeneous field that makes it difficult to gain an overview.

Objectives: This work focuses on ARAs that are controlled by gaze and eye movements. By answering the research questions, this paper provides details on the design of the systems, a comparison of input modalities, methods for measuring the performance of these controls, and an outlook on research areas that gained interest in recent years.

Methods: This review was conducted as outlined in the PRISMA 2020 Statement. After identifying a wide range of approaches in use, the authors decided to use the PRISMA-ScR extension for a scoping review to present the results. The identification process was carried out by screening three databases. After the screening process, a snowball search was conducted.

Results: 39 articles and 6 reviews were included in this review. Characteristics related to the system and study design were extracted and presented, divided into three groups based on the use of eye tracking.

Conclusion: This paper aims to provide an overview for researchers new to the field by offering insight into eye-tracking-based robot controllers. We have identified open questions that need to be answered in order to provide people with severe motor function loss with systems that are highly usable and accessible.
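
Purely as an illustrative sketch, and not taken from any of the systems covered by the review, one common way to turn gaze into an arm command is to map the gaze point on a camera view to a proportional end-effector velocity, with a dwell check so stray glances don't move the robot. A minimal Python version might look like this:

# Purely illustrative gaze-to-velocity mapping (not taken from any reviewed
# system): the gaze point on a camera view drives a proportional end-effector
# velocity in the image plane, and a dwell check suppresses stray glances.
import time
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float            # normalized horizontal gaze position, 0..1
    y: float            # normalized vertical gaze position, 0..1
    timestamp: float    # seconds

GAIN_M_PER_S = 0.05     # end-effector speed per full-screen offset
DWELL_S = 0.4           # gaze must stay put this long before the arm moves
DEADBAND = 0.05         # ignore tiny offsets around the screen center

def gaze_to_velocity(samples: list[GazeSample]) -> tuple[float, float]:
    """Map recent gaze samples to a (vx, vy) command in m/s."""
    if not samples:
        return 0.0, 0.0
    latest = samples[-1]
    # Dwell check: every sample from the last DWELL_S seconds must sit near
    # the current gaze point, otherwise we command no motion at all.
    for s in reversed(samples):
        if latest.timestamp - s.timestamp > DWELL_S:
            break
        if abs(s.x - latest.x) > 0.05 or abs(s.y - latest.y) > 0.05:
            return 0.0, 0.0
    dx, dy = latest.x - 0.5, latest.y - 0.5          # offset from screen center
    if abs(dx) < DEADBAND and abs(dy) < DEADBAND:
        return 0.0, 0.0
    return GAIN_M_PER_S * dx, -GAIN_M_PER_S * dy     # image y axis points down

now = time.time()
steady_gaze = [GazeSample(0.8, 0.5, now - 0.5 + 0.1 * i) for i in range(6)]
print(gaze_to_velocity(steady_gaze))                 # roughly (0.015, 0.0)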
