Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

We present Morphy, a novel compliant and morphologically aware flying robot that integrates sensorized flexible joints in its arms, thus enabling resilient collisions at high speeds and the ability to squeeze through openings narrower than its nominal dimensions.

Morphy represents a new class of soft flying robots that can facilitate unprecedented resilience through innovations both in the “body” and “brain.” The novel soft body can, in turn, enable new avenues for autonomy. Collisions that previously had to be avoided have now become acceptable risks, while areas that are untraversable for a certain robot size can now be negotiated through self-squeezing. These novel bodily interactions with the environment can give rise to new types of embodied intelligence.

[ ARL ]

Thanks, Kostas!

Segments of daily training for robots driven by reinforcement learning. Multiple tests are done in advance so the robots can provide friendly service to humans. The training includes some extreme tests. Please do not imitate!

[ Unitree ]

Sphero is not only still around, it’s making new STEM robots!

[ Sphero ]

Googly eyes mitigate all robot failures.

[ WVUIRL ]

Here I am, without the ability or equipment (or desire) required to iron anything that I own, and Flexiv’s got robots out there ironing fancy leather car seats.

[ Flexiv ]

Thanks, Noah!

We unveiled a significant leap forward in perception technology for our humanoid robot GR-1. The newly adapted pure-vision solution integrates bird’s-eye view, transformer models, and an occupancy network for precise and efficient environmental perception.

[ Fourier ]

Thanks, Serin!

LimX Dynamics’ humanoid robot CL-1 was launched in December 2023, climbing stairs based on real-time terrain perception at two steps per stair. Four months later, in April 2024, a second demo video showed CL-1 in the same scenario, now climbing the same stairs at one step per stair.

[ LimX Dynamics ]

Thanks, Ou Yan!

New research from the University of Massachusetts Amherst shows that programming robots to create their own teams and voluntarily wait for their teammates results in faster task completion, with the potential to improve manufacturing, agriculture, and warehouse automation.

[ HCRL ] via [ UMass Amherst ]

Thanks, Julia!

LASDRA (Large-size Aerial Skeleton with Distributed Rotor Actuation, ICRA 2018) is a scalable and modular aerial robot. It can assume a very slender, long, and dexterous form factor and is very lightweight.

[ SNU INRoL ]

We propose augmenting initially passive structures built from simple repeated cells with novel active units to enable dynamic, shape-changing, and robotic applications. Inspired by metamaterials that can employ mechanisms, we build a framework that allows users to configure cells of this passive structure to allow it to perform complex tasks.

[ CMU ]

Testing autonomous exploration at the Exyn Office using Spot from Boston Dynamics. In this demo, Spot autonomously explores our flight space while on the hunt for one of our engineers.

[ Exyn ]

Meet Heavy Picker, the strongest robot in bulky-waste sorting and an absolute pro at lifting and sorting waste. With skills that would make a concert pianist jealous and a work ethic that never needs coffee breaks, Heavy Picker was on the lookout for new challenges.

[ Zen Robotics ]

AI is the biggest and most consequential business, financial, legal, technological, and cultural story of our time. In this panel, you will hear from the underrepresented community of women scientists who have been leading the AI revolution—from the beginning to now.

[ Stanford HAI ]



Insects have long been an inspiration for robots. The insect world is full of things that are tiny, fully autonomous, highly mobile, energy efficient, multimodal, self-repairing, and I could go on and on but you get the idea—insects are both an inspiration and a source of frustration to roboticists because it’s so hard to get robots to have anywhere close to insect capability.

We’re definitely making progress, though. In a paper published last month in IEEE Robotics and Automation Letters, roboticists from Shanghai Jiao Tong University demonstrated the most bug-like robotic bug I think I’ve ever seen.

A Multi-Modal Tailless Flapping-Wing Robot [ www.youtube.com ]

Okay so it may not look the most bug-like, but it can do many very buggy bug things, including crawling, taking off horizontally, flying around (with six degrees of freedom control), hovering, landing, and self-righting if necessary. JT-fly weighs about 35 grams and has a wingspan of 33 centimeters, using four wings at once to fly at up to 5 meters per second and six legs to scurry at 0.3 meters per second. Its 380 milliampere-hour battery powers it for an actually somewhat useful 8-ish minutes of flying and about 60 minutes of crawling.

While that amount of endurance may not sound like a lot, robots like these aren’t necessarily intended to be moving continuously. Rather, they move a little bit, find a nice safe perch, and then do some sensing or whatever until you ask them to move to a new spot. Ideally, most of that movement would be crawling, but having the option to fly makes JT-fly exponentially more useful.

Or, potentially more useful, because obviously this is still very much a research project. It does seem like there’s a bunch more optimization that could be done here; for example, JT-fly uses completely separate systems for flying and crawling, with two motors powering the legs and two additional motors powering the wings, plus two wing servos for control. There’s currently a limited amount of onboard autonomy, with an inertial measurement unit, barometer, and wireless communication, but otherwise not much in the way of useful payload.

Insects are both an inspiration and a source of frustration to roboticists because it’s so hard to get robots to have anywhere close to insect capability.

It won’t surprise you to learn that the researchers have disaster relief applications in mind for this robot, suggesting that “after natural disasters such as earthquakes and mudslides, roads and buildings will be severely damaged, and in these scenarios, JT-fly can rely on its flight ability to quickly deploy into the mission area.” One day, robots like these will actually be deployed for disaster relief, and although that day is not today, we’re just a little bit closer than we were before.

“A Multi-Modal Tailless Flapping-Wing Robot Capable of Flying, Crawling, Self-Righting and Horizontal Takeoff,” by Chaofeng Wu, Yiming Xiao, Jiaxin Zhao, Jiawang Mou, Feng Cui, and Wu Liu from Shanghai Jiao Tong University, is published in the May issue of IEEE Robotics and Automation Letters.


In this study, we address the critical need for enhanced situational awareness and victim detection capabilities in Search and Rescue (SAR) operations amidst disasters. Traditional unmanned ground vehicles (UGVs) often struggle in such chaotic environments due to their limited manoeuvrability and the challenge of distinguishing victims from debris. Recognising these gaps, our research introduces a novel technological framework that integrates advanced gesture-recognition with cutting-edge deep learning for camera-based victim identification, specifically designed to empower UGVs in disaster scenarios. At the core of our methodology is the development and implementation of the Meerkat Optimization Algorithm—Stacked Convolutional Neural Network—Bi—Long Short Term Memory—Gated Recurrent Unit (MOA-SConv-Bi-LSTM-GRU) model, which sets a new benchmark for hand gesture detection with its remarkable performance metrics: accuracy, precision, recall, and F1-score all approximately 0.9866. This model enables intuitive, real-time control of UGVs through hand gestures, allowing for precise navigation in confined and obstacle-ridden spaces, which is vital for effective SAR operations. Furthermore, we leverage the capabilities of the latest YOLOv8 deep learning model, trained on specialised datasets to accurately detect human victims under a wide range of challenging conditions, such as varying occlusions, lighting, and perspectives. Our comprehensive testing in simulated emergency scenarios validates the effectiveness of our integrated approach. The system demonstrated exceptional proficiency in navigating through obstructions and rapidly locating victims, even in environments with visual impairments like smoke, clutter, and poor lighting. Our study not only highlights the critical gaps in current SAR response capabilities but also offers a pioneering solution through a synergistic blend of gesture-based control, deep learning, and purpose-built robotics. The key findings underscore the potential of our integrated technological framework to significantly enhance UGV performance in disaster scenarios, thereby optimising life-saving outcomes when time is of the essence. This research paves the way for future advancements in SAR technology, with the promise of more efficient and reliable rescue operations in the face of disaster.
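For a sense of what the camera-based detection stage involves in code, here is a minimal sketch built on the off-the-shelf Ultralytics YOLOv8 API. The stock yolov8n weights, the 0.4 confidence threshold, and the person-class filter are assumptions for illustration; the study itself trains YOLOv8 on specialized SAR datasets.

```python
# Hedged sketch of a frame-by-frame victim detector built on YOLOv8.
# Weights, threshold, and the "person" class filter are illustrative
# assumptions, not the paper's actual training setup.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # stock pretrained weights

def detect_victims(frame, conf=0.4):
    """Return [x1, y1, x2, y2] boxes for people detected in one camera frame."""
    boxes = []
    for result in model(frame, conf=conf, verbose=False):
        for box in result.boxes:
            if model.names[int(box.cls)] == "person":
                boxes.append(box.xyxy[0].tolist())
    return boxes

cap = cv2.VideoCapture(0)  # UGV camera stream (device index is an assumption)
ok, frame = cap.read()
if ok:
    print(detect_victims(frame))
```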



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

There’s a Canadian legend about a flying canoe, because of course there is. The legend involves drunkenness, a party with some ladies, swearing, and a pact with the devil, because of course it does. Fortunately for the drone in this video, it needs none of that to successfully land on this (nearly) flying canoe, just some high-friction, shock-absorbing legs and judicious application of reverse thrust.

[ Createk ]

Thanks, Alexis!

This paper summarizes an autonomous driving project by musculoskeletal humanoids. The musculoskeletal humanoid, which mimics the human body in detail, has redundant sensors and a flexible body structure. We reconsider the developed hardware and software of the musculoskeletal humanoid Musashi in the context of autonomous driving. Each component of autonomous driving is carried out using the benefits of this hardware and software. Finally, Musashi succeeded in operating the pedals and steering wheel while recognizing its surroundings.

[ Paper ] via [ JSK Lab ]

Thanks, Kento!

Robust AI has been kinda quiet for the last little while, but their Carter robot continues to improve.

[ Robust AI ]

One of the key arguments for building robots that have similar form factors to human beings is that we can leverage the massive amount of human data for training. In this paper, we introduce a full-stack system for humanoids to learn motion and autonomous skills from human data. We demonstrate the system on our customized 33-degree-of-freedom, 180-centimeter humanoid, autonomously completing tasks such as wearing a shoe to stand up and walk, unloading objects from warehouse racks, folding a sweatshirt, rearranging objects, typing, and greeting another robot with 60 to 100 percent success rates using up to 40 demonstrations.

[ HumanPlus ]

We present OmniH2O (Omni Human-to-Humanoid), a learning-based system for whole-body humanoid teleoperation and autonomy. Using kinematic pose as a universal control interface, OmniH2O enables various ways for a human to control a full-sized humanoid with dexterous hands, including using real-time teleoperation through VR headset, verbal instruction, and RGB camera. OmniH2O also enables full autonomy by learning from teleoperated demonstrations or integrating with frontier models such as GPT-4.

[ OmniH2O ]

A collaboration between Boxbot, Agility Robotics, and Robust.AI at Playground Global. Make sure and watch until the end to hear the roboticists in the background react when the demo works in a very roboticist way.

::clap clap clap:: yaaaaayyyyy....

[ Robust AI ]

The use of drones and robotic devices threatens civilian and military actors in conflict areas. We started trials with robots to see how we can adapt our HEAT (Hostile Environment Awareness Training) courses to this new reality.

[ CSD ]

Thanks, Ebe!

How to make humanoids do versatile parkour jumping, clapping dance, cliff traversal, and box pick-and-move with a unified RL framework? We introduce WoCoCo: Whole-body humanoid Control with sequential Contacts.

[ WoCoCo ]

A selection of excellent demos from the Learning Systems and Robotics Lab at TUM and the University of Toronto.

[ Learning Systems and Robotics Lab ]

Harvest Automation, one of the OG autonomous mobile robot companies, hasn’t updated their website since like 2016, but some videos just showed up on YouTube this week.

[ Harvest Automation ]

Northrop Grumman has been pioneering capabilities in the undersea domain for more than 50 years. Now, we are creating a new class of uncrewed underwater vehicles (UUV) with Manta Ray. Taking its name from the massive “winged” fish, Manta Ray will operate long-duration, long-range missions in ocean environments where humans can’t go.

[ Northrop Grumman ]

Akara Robotics’ autonomous robotic UV disinfection demo.

[ Akara Robotics ]

Scientists have computationally predicted hundreds of thousands of novel materials that could be promising for new technologies—but testing to see whether any of those materials can be made in reality is a slow process. Enter A-Lab, which uses robots guided by artificial intelligence to speed up the process.

[ A-Lab ]

We wrote about this research from CMU a while back, but here’s a quite nice video.

[ CMU RI ]

Aw yiss pick and place robots.

[ Fanuc ]

Axel Moore describes his lab’s work in orthopedic biomechanics to relieve joint pain with robotic assistance.

[ CMU ]

The field of humanoid robots has grown in recent years, with several companies and research laboratories developing new humanoid systems. However, the number of running robots has not noticeably risen, despite the need for fast locomotion to quickly serve given tasks, which requires traversing complex terrain by running and jumping over obstacles. To provide an overview of the design of humanoid robots with bioinspired mechanisms, this paper introduces the fundamental functions of the human running gait.

[ Paper ]



The performance of the robotic manipulator is negatively impacted by outside disturbances and uncertain parameters. The system’s variables are also highly coupled, complex, and nonlinear, indicating that it is a multi-input, multi-output system. Therefore, it is necessary to develop a controller that can control the variables in the system in order to handle these complications. This work proposes six control structures based on neural networks (NNs) with proportional integral derivative (PID) and fractional-order PID (FOPID) controllers to operate a 2-link rigid robot manipulator (2-LRRM) for trajectory tracking. These are named as set-point-weighted PID (W-PID), set-point weighted FOPID (W-FOPID), recurrent neural network (RNN)-like PID (RNNPID), RNN-like FOPID (RNN-FOPID), NN+PID, and NN+FOPID controllers. The zebra optimization algorithm (ZOA) was used to adjust the parameters of the proposed controllers while reducing the integral-time-square error (ITSE). A new objective function was proposed for tuning to generate controllers with minimal chattering in the control signal. After implementing the proposed controller designs, a comparative robustness study was conducted among these controllers by altering the initial conditions, disturbances, and model uncertainties. The simulation results demonstrate that the NN+FOPID controller has the best trajectory tracking performance with the minimum ITSE and best robustness against changes in the initial states, external disturbances, and parameter uncertainties compared to the other controllers.
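For context on the cost being minimized: ITSE integrates time-weighted squared tracking error, so errors that persist late in the trajectory are penalized more than early transients. Below is a toy sketch of scoring a plain PID loop by ITSE; the single-joint plant, gains, and reference trajectory are invented for illustration and stand in for the paper’s far richer 2-link dynamics, FOPID variants, and ZOA tuning.

```python
# Toy sketch: evaluate a plain PID controller by the ITSE metric.
# One-joint plant, gains, and sine reference are illustrative assumptions.
import numpy as np

def simulate_pid(kp, ki, kd, dt=0.001, t_end=2.0):
    t = np.arange(0.0, t_end, dt)
    ref = np.sin(t)                      # desired joint angle (rad)
    theta, omega = 0.0, 0.0              # joint angle and velocity
    integral, prev_err = 0.0, 0.0
    itse = 0.0
    for k, tk in enumerate(t):
        err = ref[k] - theta
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        tau = kp * err + ki * integral + kd * deriv  # PID control torque
        # toy 1-DOF plant: inertia * theta_ddot = tau - damping * omega
        omega += (tau - 0.1 * omega) / 0.05 * dt
        theta += omega * dt
        itse += tk * err ** 2 * dt       # integral of t * e(t)^2
    return itse

print(simulate_pid(kp=20.0, ki=5.0, kd=1.0))  # smaller is better
```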

Implementing and deploying advanced technologies are principal in improving manufacturing processes, signifying a transformative stride in the industrial sector. Computer vision plays a crucial innovation role during this technological advancement, demonstrating broad applicability and profound impact across various industrial operations. This pivotal technology is not merely an additive enhancement but a revolutionary approach that redefines quality control, automation, and operational efficiency parameters in manufacturing landscapes. By integrating computer vision, industries are positioned to optimize their current processes significantly and spearhead innovations that could set new standards for future industrial endeavors. However, the integration of computer vision in these contexts necessitates comprehensive training programs for operators, given this advanced system’s complexity and abstract nature. Historically, training modalities have grappled with the complexities of understanding concepts as advanced as computer vision. Despite these challenges, computer vision has recently surged to the forefront across various disciplines, attributed to its versatility and superior performance, often matching or exceeding the capabilities of other established technologies. Nonetheless, there is a noticeable knowledge gap among students, particularly in comprehending the application of Artificial Intelligence (AI) within Computer Vision. This disconnect underscores the need for an educational paradigm transcending traditional theoretical instruction. Cultivating a more practical understanding of the symbiotic relationship between AI and computer vision is essential. To address this, the current work proposes a project-based instructional approach to bridge the educational divide. This methodology will enable students to engage directly with the practical aspects of computer vision applications within AI. By guiding students through a hands-on project, they will learn how to effectively utilize a dataset, train an object detection model, and implement it within a microcomputer infrastructure. This immersive experience is intended to bolster theoretical knowledge and provide a practical understanding of deploying AI techniques within computer vision. The main goal is to equip students with a robust skill set that translates into practical acumen, preparing a competent workforce to navigate and innovate in the complex landscape of Industry 4.0. This approach emphasizes the criticality of adapting educational strategies to meet the evolving demands of advanced technological infrastructures. It ensures that emerging professionals are adept at harnessing the potential of transformative tools like computer vision in industrial settings.

This paper introduces DAC-HRC, a novel cognitive architecture designed to optimize human-robot collaboration (HRC) in industrial settings, particularly within the context of Industry 4.0. The architecture is grounded in the Distributed Adaptive Control theory and the principles of joint intentionality and interdependence, which are key to effective HRC. Joint intentionality refers to the shared goals and mutual understanding between a human and a robot, while interdependence emphasizes the reliance on each other’s capabilities to complete tasks. DAC-HRC is applied to a hybrid recycling plant for the disassembly and recycling of Waste Electrical and Electronic Equipment (WEEE) devices. The architecture incorporates several cognitive modules operating at different timescales and abstraction levels, fostering adaptive collaboration that is personalized to each human user. The effectiveness of DAC-HRC is demonstrated through several pilot studies, showcasing functionalities such as turn-taking interaction, personalized error-handling mechanisms, adaptive safety measures, and gesture-based communication. These features enhance human-robot collaboration in the recycling plant by promoting real-time robot adaptation to human needs and preferences. The DAC-HRC architecture aims to contribute to the development of a new HRC paradigm by paving the way for more seamless and efficient collaboration in Industry 4.0 by relying on socially adept cognitive architectures.



The original version of this post by Benjie Holson was published on Substack here, and includes Benjie’s original comics as part of his series on robots and startups.

I worked on this idea for months before I decided it was a mistake. The second time I heard someone mention it, I thought, “That’s strange, these two groups had the same idea. Maybe I should tell them it didn’t work for us.” The third and fourth time I rolled my eyes and ignored it. The fifth time I heard about a group struggling with this mistake, I decided it was worth a blog post all on its own. I call this idea “The Mythical Non-Roboticist.”

The Mistake

The idea goes something like this: Programming robots is hard. And there are some people with really arcane skills and PhDs who are really expensive and seem to be required for some reason. Wouldn’t it be nice if we could do robotics without them? 1 What if everyone could do robotics? That would be great, right? We should make a software framework so that non-roboticists can program robots.

This idea is so close to a correct idea that it’s hard to tell why it doesn’t work out. On the surface, it’s not wrong: All else being equal, it would be good if programming robots was more accessible. The problem is that we don’t have a good recipe for making working robots. So we don’t know how to make that recipe easier to follow. In order to make things simple, people end up removing things that folks might need, because no one knows for sure what’s absolutely required. It’s like saying you want to invent an invisibility cloak and want to be able to make it from materials you can buy from Home Depot. Sure, that would be nice, but if you invented an invisibility cloak that required some mercury and neodymium to manufacture would you toss the recipe?

In robotics, this mistake is based on a very true and very real observation: Programming robots is super hard. Famously hard. It would be super great if programming robots was easier. The issue is this: Programming robots has two different kinds of hard parts.

Robots are hard because the world is complicated

Moor Studio/Getty Images

The first kind of hard part is that robots deal with the real world, imperfectly sensed and imperfectly actuated. Global mutable state is bad programming style because it’s really hard to deal with, but to robot software the entire physical world is global mutable state, and you only get to unreliably observe it and hope your actions approximate what you wanted to achieve. Getting robotics to work at all is often at the very limit of what a person can reason about, and requires the flexibility to employ whatever heuristic might work for your special problem. This is the intrinsic complexity of the problem: Robots live in complex worlds, and for every working solution there are millions of solutions that don’t work, and finding the right one is hard, and often very dependent on the task, robot, sensors, and environment.

Folks look at that challenge, see that it is super hard, and decide that, sure, maybe some fancy roboticist could solve it in one particular scenario, but what about “normal” people? “We should make this possible for non-roboticists” they say. I call these users “Mythical Non-Roboticists” because once they are programming a robot, I feel they become roboticists. Isn’t anyone programming a robot for a purpose a roboticist? Stop gatekeeping, people.

Don’t design for amorphous groups

I also call them “mythical” because usually the “non-roboticist” implied is a vague, amorphous group. Don’t design for amorphous groups. If you can’t name three real people (that you have talked to) that your API is for, then you are designing for an amorphous group and only amorphous people will like your API.

And with this hazy group of users in mind (and seeing how difficult everything is), folks think, “Surely we could make this easier for everyone else by papering over these things with simple APIs?”

No. No you can’t. Stop it.

You can’t paper over intrinsic complexity with simple APIs because if your APIs are simple they can’t cover the complexity of the problem. You will inevitably end up with a beautiful looking API, with calls like “grasp_object” and “approach_person” which demo nicely in a hackathon kickoff but last about 15 minutes of someone actually trying to get some work done. It will turn out that, for their particular application, “grasp_object()” makes 3 or 4 wrong assumptions about “grasp” and “object” and doesn’t work for them at all.
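To make that failure mode concrete, here is a hedged sketch; every name in it is invented for illustration and belongs to no real framework. The first signature is the demo-ware version with its assumptions baked in; the second surfaces those assumptions as arguments.

```python
# What the hackathon demo ships: every assumption baked in and invisible.
def grasp_object(object_id):
    """Implicitly assumes a top-down pinch grasp, a rigid object, an
    uncluttered scene, and perfect segmentation of `object_id`."""
    ...

# What users actually need (redefined here for contrast): the same hidden
# assumptions, surfaced as keyword arguments with sane defaults, so the
# easy call stays easy while the hard cases become possible.
def grasp_object(object_id, *, grasp_type="pinch", approach_direction=None,
                 max_force_n=20.0, allow_collision_with=(), retries=1):
    """Same task; each hidden assumption is now an overridable argument."""
    ...
```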

Your users are just as smart as you

This is made worse by the pervasive assumption that these people are less savvy (read: less intelligent) than the creators of this magical framework. 2 That feeling of superiority will cause the designers to cling desperately to their beautiful, simple “grasp_object()”s and resist adding the knobs and arguments needed to cover more use cases and allow the users to customize what they get.

Ironically this foists a bunch of complexity onto the poor users of the API, who have to come up with clever workarounds to get it to work at all.

Moor Studio/Getty Images

The sad, salty, bitter icing on this cake-of-frustration is that, even if done really well, the goal of this kind of framework would be to expand the group of people who can do the work. And to achieve that, it would sacrifice some performance you can only get by super-specializing your solution to your problem. If we lived in a world where expert roboticists could program robots that worked really well, but there was so much demand for robots that there just wasn’t enough time for those folks to do all the programming, this would be a great solution. 3

The obvious truth is that (outside of really constrained environments like manufacturing cells) even the very best collection of real bona fide, card-carrying roboticists working at the best of their ability struggle to get close to a level of performance that makes the robots commercially viable, even with long timelines and mountains of funding. 4 We don’t have any headroom to sacrifice power and effectiveness for ease.

What problem are we solving?

So should we give up making it easier? Is robotic development available only to a small group of elites with fancy PhDs? 5 No to both! I have worked with tons of undergrad interns who have been completely able to do robotics.6 I myself am mostly self-taught in robot programming.7 While there is a lot of intrinsic complexity in making robots work, I don’t think there is any more than, say, video game development.

In robotics, like in all things, experience helps, some things are teachable, and as you master many areas you can see things start to connect together. These skills are not magical or unique to robotics. We are not as special as we like to think we are.

But what about making programming robots easier? Remember way back at the beginning of the post when I said that there were two different kinds of hard parts? One is the intrinsic complexity of the problem, and that one will be hard no matter what. 8 But the second is the incidental complexity, or as I like to call it, the stupid BS complexity.

Stupid BS Complexity

Robots are asynchronous, distributed, real-time systems with weird hardware. All of that will be hard to configure for stupid BS reasons. Those drivers need to work in the weird flavor of Linux you want for hard real-time for your controls and getting that all set up will be hard for stupid BS reasons. You are abusing Wi-Fi so you can roam seamlessly without interruption but Linux’s Wi-Fi will not want to do that. Your log files are huge and you have to upload them somewhere so they don’t fill up your robot. You’ll need to integrate with some cloud something or other and deal with its stupid BS. 9

Moor Studio/Getty Images

There is a ton of crap to deal with before you even get to the complexity of dealing with 3D rotation, moving reference frames, time synchronization, and messaging protocols. Those things have intrinsic complexity (you have to think about when something was observed and how to reason about it as other things have moved) and stupid BS complexity (There’s a weird bug because someone multiplied two transform matrices in the wrong order and now you’re getting an error message that deep in some protocol a quaternion is not normalized. WTF does that mean?) 10
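For the curious, the defensive fix behind that particular error message is small; here is a numpy-only sketch, with the surrounding message plumbing omitted and the quaternion convention assumed to be (x, y, z, w).

```python
# The defensive one-liner behind that error: re-normalize quaternions before
# they cross a protocol boundary, since float drift from repeated transform
# multiplications slowly pushes them off unit length. numpy-only sketch.
import numpy as np

def normalized(q, eps=1e-9):
    """Return q (x, y, z, w) rescaled to unit norm; reject degenerate input."""
    n = np.linalg.norm(q)
    if n < eps:
        raise ValueError("degenerate quaternion, refusing to normalize")
    return q / n

q = np.array([0.70710, 0.0, 0.0, 0.70715])  # drifted slightly off unit norm
print(normalized(q))                         # safe to publish downstream
```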

One of the biggest challenges of robot programming is wading through the sea of stupid BS you need to wrangle in order to start working on your interesting and challenging robotics problem.

So a simple heuristic to make good APIs is:

Design your APIs for someone as smart as you, but less tolerant of stupid BS.

That feels universal enough that I’m tempted to call it Holson’s Law of Tolerable API Design.

When you are using tools you’ve made, you know them well enough to know the rough edges and how to avoid them.

But rough edges are things that have to be held in a programmer’s memory while they are using your system. If you insist on making a robotics framework 11, you should strive to make it as powerful as you can with the least amount of stupid BS. Eradicate incidental complexity everywhere you can. You want to make APIs that have maximum flexibility but good defaults. I like python’s default-argument syntax for this because it means you can write APIs that can be used like:
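(A minimal sketch of that pattern; the Robot class and its move_to arguments are invented for illustration.)

```python
# Default-argument pattern: the easy call stays a one-liner, and every
# assumption is an overridable keyword. Names here are hypothetical.
class Robot:
    def move_to(self, goal, *, frame="world", timeout_s=10.0,
                max_velocity=None, collision_check=True):
        print(f"moving to {goal} (frame={frame!r}, check={collision_check})")

robot = Robot()
robot.move_to((1.0, 2.0))                    # sane defaults: just works
robot.move_to((1.0, 2.0), timeout_s=3.0)     # override exactly one thing
robot.move_to((1.0, 2.0), frame="odom",      # full control when needed
              max_velocity=0.2, collision_check=False)
```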

It is possible to have easy things be simple and allow complex things. And please, please, please don’t make condescending APIs. Thanks!

1. Ironically it is very often the expensive arcane-knowledge-having PhDs who are proposing this.

2. Why is it always a framework?

3. The exception that might prove the rule is things like traditional manufacturing-cell automation. That is a place where the solutions exist, but the limit to expanding is setup cost. I’m not an expert in this domain, but I’d worry that physical installation and safety compliance might still dwarf the software programming cost, though.

4. As I well know from personal experience.

5. Or non-fancy PhDs for that matter?

6. I suspect that many bright high schoolers would also be able to do the work. Though, as Google tends not to hire them, I don’t have good examples.

7. My schooling was in Mechanical Engineering and I never got a PhD, though my ME classwork did include some programming fundamentals.

8. Unless we create effective general purpose AI. It feels weird that I have to add that caveat, but the possibility that it’s actually coming for robotics in my lifetime feels much more possible than it did two years ago.

9. And if you are unlucky, its API was designed by someone who thought they were smarter than their customers.

10. This particular flavor of BS complexity is why I wrote posetree.py. If you do robotics, you should check it out.

11. Which, judging by the trail of dead robot-framework-companies, is a fraught thing to do.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UNITED ARAB EMIRATES
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

In this video, you see the start of 1X’s development of an advanced AI system that chains simple tasks into complex actions using voice commands, allowing seamless multi-robot control and remote operation. By starting with single-task models, we ensure smooth transitions to more powerful unified models, ultimately aiming to automate high-level actions using AI.

This video does not contain teleoperation, computer graphics, cuts, video speedups, or scripted trajectory playback. It’s all controlled via neural networks.

[ 1X ]

As the old adage goes, one cannot claim to be a true man without a visit to the Great Wall of China. XBot-L, a full-sized humanoid robot developed by Robot Era, recently acquitted itself well in a walk along sections of the Great Wall.

[ Robot Era ]

The paper presents a novel rotary-wing platform that is capable of folding and expanding its wings during flight. Our source of inspiration came from birds’ ability to fold their wings to navigate through small spaces and dive. The design of the rotorcraft is based on the monocopter platform, which is inspired by the flight of Samara seeds.

[ AirLab ]

We present a variable stiffness robotic skin (VSRS), a concept that integrates stiffness-changing capabilities, sensing, and actuation into a single, thin, modular robot design. Reconfiguring, reconnecting, and reshaping VSRSs allows them to achieve new functions both on a host body and in its absence.

[ Yale Faboratory ]

Heimdall is a new rover design for the 2024 University Rover Challenge (URC). This video shows highlights of Heimdall’s trip during the four missions at URC 2024.

Heimdall features a split body design with whegs (wheel legs), and a drill for sub-surface sample collection. It also has the ability to manipulate a variety of objects, collect surface samples, and perform onboard spectrometry and chemical tests.

[ WVU ]

I think this may be the first time I’ve seen an autonomous robot using a train? This one is delivering lunch boxes!

[ JSME ]

The AI system identifies and separates red apples from green apples; a robotic arm then picks up the identified red apples with a qb SoftHand Industry and gently places them in a basket.

My favorite part is the magnetic apple stem system.

[ QB Robotics ]

DexNex (v0, June 2024) is an anthropomorphic teleoperation testbed for dexterous manipulation at the Center for Robotics and Biosystems at Northwestern University. DexNex recreates human upper-limb functionality through a near 1-to-1 mapping between Operator movements and Avatar actions.

The motion of the Operator’s arms, hands, fingers, and head is fed forward to the Avatar, while fingertip pressures, finger forces, and camera images are fed back to the Operator. DexNex aims to minimize the latency of each subsystem to provide a seamless, immersive, and responsive user experience. Future research includes gaining a better understanding of how critical haptic and vision feedback are for different manipulation tasks; providing arm-level grounded force feedback; and using machine learning to transfer dexterous skills from the human to the robot.
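For flavor, here is a minimal sketch of the bilateral loop such a testbed runs; every class and method name below is a hypothetical stand-in, not DexNex’s actual software.

import time

class OperatorRig:  # mocap gloves plus headset (stand-in)
    def read_pose(self):
        return {"arm": [0.0] * 7, "fingers": [0.0] * 16, "head": [0.0] * 3}

    def render_feedback(self, forces):  # fingertip pressure display (stand-in)
        pass

class AvatarRobot:  # the remote robot (stand-in)
    def command_pose(self, pose):
        pass

    def read_forces(self):
        return {"fingertips": [0.0] * 5}

operator, avatar = OperatorRig(), AvatarRobot()
PERIOD_S = 0.001  # a 1 kHz loop keeps per-cycle latency low

for _ in range(1000):  # one second of teleoperation
    t0 = time.perf_counter()
    avatar.command_pose(operator.read_pose())  # feed-forward path
    operator.render_feedback(avatar.read_forces())  # feedback path
    time.sleep(max(0.0, PERIOD_S - (time.perf_counter() - t0)))  # hold the period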

[ Northwestern ]

Sometimes the best path isn’t the smoothest or straightest surface; it’s the path that’s actually meant to be a path.

[ RaiLab ]

Fulfilling a school requirement by working in a Romanian locomotive factory one week each month, Daniela Rus learned to operate “machines that help us make things.” Appreciation for the practical side of math and science stuck with Daniela, who is now Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

[ MIT ]

For AI to achieve its full potential, non-experts need to be let into the development process, says Rumman Chowdhury, CEO and cofounder of Humane Intelligence. She tells the story of farmers fighting for the right to repair their own AI-powered tractors (which some manufacturers actually made illegal), and proposes that everyone should have the ability to report issues, patch updates, or even retrain AI technologies for their specific uses.

[ TED ]

Soft robots exhibit complex nonlinear dynamics with many degrees of freedom, making their modelling and control challenging. Typically, reduced-order models in time or space are used to address these challenges, but the resulting simplification limits soft-robot control accuracy and restricts the range of motion. In this work, we introduce an end-to-end learning-based approach for fully dynamic modelling of any general robotic system that does not rely on predefined structures, learning dynamic models of the robot directly in the visual space. The generated models possess identical dimensionality to the observation space, so model complexity is determined by the sensory system without explicitly decomposing the problem. To validate the effectiveness of the proposed method, we apply it to a fully soft robotic manipulator and demonstrate its applicability in controller development through an open-loop optimization-based controller. We achieve a wide range of dynamic control tasks, including shape control, trajectory tracking, and obstacle avoidance, using a model derived from just 90 min of real-world data. Our work thus far provides the most comprehensive strategy for controlling a general soft robotic system, without constraints on the shape, properties, or dimensionality of the system.
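To make the recipe concrete, here is a minimal sketch of the general idea, not the authors’ code: fit a dynamics model that maps the current observation and action directly to the next observation, then drop it into an open-loop, sampling-based controller. The linear least-squares model and random-shooting planner below are deliberate simplifications of the paper’s learned model and optimizer.

import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM, H = 64, 4, 10  # flattened-image dim, action dim, horizon

# Stand-in for the unknown true robot dynamics we can only sample.
W_true = rng.normal(scale=0.1, size=(OBS_DIM, OBS_DIM + ACT_DIM))

def step(obs, act):
    return W_true @ np.concatenate([obs, act])

# Collect (obs, act, next_obs) transitions.
obs = rng.normal(size=(1000, OBS_DIM))
act = rng.normal(size=(1000, ACT_DIM))
nxt = np.stack([step(o, a) for o, a in zip(obs, act)])

# Fit the dynamics model directly in observation space (least squares),
# so the model has the same dimensionality as the observations.
X = np.hstack([obs, act])
W_hat, *_ = np.linalg.lstsq(X, nxt, rcond=None)

def predict(o, a):
    return np.concatenate([o, a]) @ W_hat

# Open-loop random-shooting controller toward a goal observation.
def plan(obs0, goal, n_samples=256):
    best_cost, best_actions = np.inf, None
    for _ in range(n_samples):
        actions = rng.normal(size=(H, ACT_DIM))
        o = obs0
        for a in actions:
            o = predict(o, a)
        cost = np.linalg.norm(o - goal)
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions, best_cost

actions, cost = plan(rng.normal(size=OBS_DIM), np.zeros(OBS_DIM))
print("planned action sequence", actions.shape, "predicted cost", round(float(cost), 3))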

Introduction: Robots present an opportunity to enhance healthcare delivery. Rather than targeting complete automation and nurse replacement, collaborative robots, or “cobots”, might be designed to allow nurses to focus on high-value caregiving. While many institutions are now investing in these platforms, there is little publicly available data on how cobots are being developed, implemented, and evaluated to determine if and how they support nursing practice in the real world.

Methods: This systematic review investigates the current state of cobotic technologies designed to assist nurses in hospital settings, their intended applications, and their impacts on nurses and patient care. A comprehensive database search identified 28 relevant peer-reviewed articles published since 2018 that involve real studies with robotic platforms in simulated or actual clinical contexts.

Results: Few cobots were explicitly designed to reduce nursing workload through administrative or logistical assistance. Most of the included studies were patient-centered rather than nurse-centered, but covered assistance with tasks like medication delivery, vital-sign monitoring, and social interaction. Most applications emerged from India, with limited evidence from the United States despite the commercial availability of nurse-assistive cobots. Robots ranged from proof-of-concept to commercially deployed systems.

Discussion: This review highlights the need for further published studies on cobotic development and evaluation. A larger body of evidence is needed to recognize current limitations and pragmatic opportunities to assist nurses and patients using state-of-the-art robotics. Human-centered design can assist in discovering the right opportunities for cobotic assistance. Committed research-practice partnerships and human-centered design are needed to guide the technical development of nurse-centered cobotic solutions.

The Expanded Endoscopic Endonasal Approach, one of the best examples of endoscopic neurosurgery, allows access to the skull base through the natural orifice of the nostril. Current standard instruments lack articulation, limiting operative access and surgeon dexterity, and thus could benefit from robotic articulation. In this study, a handheld robotic system with a series of detachable end-effectors for this approach is presented. The system comprises interchangeable articulated two- or three-degree-of-freedom, 3 mm instruments that expand the operative workspace and enhance the surgeon’s dexterity; an ergonomically designed handheld controller with a rotating joystick-body that can be placed in the position most comfortable for the user; and the accompanying control box. The robotic instruments were experimentally evaluated for their workspace, structural integrity, and force-delivery capabilities. The entire system was then tested in a pre-clinical context in a phantom feasibility test, followed by a cadaveric pilot study with a cohort of surgeons of varied clinical experience. Results from this series of experiments suggested enhanced dexterity and robustness adequate for feasibility in a clinical context, as well as improvement over current neurosurgical instruments.

Social technology can improve the quality of older adults’ (OAs’) social lives and mitigate negative mental and physical health outcomes. According to Nowland et al.’s model of the relationship between social Internet use and loneliness, people can engage with technology either to stimulate social interaction (the stimulation hypothesis) or to disengage from their real world (the disengagement hypothesis). External events, such as the long periods of social isolation during the COVID-19 pandemic, can also affect whether people use technology in line with the stimulation or the disengagement hypothesis. We examined how the COVID-19 pandemic affected the social challenges OAs faced and their expectations for robot technology to solve those challenges. We conducted two participatory design (PD) workshops with OAs, one during and one after the COVID-19 pandemic. During the pandemic, OAs’ primary concern was distanced communication with family members, with a prevalent desire to assist them through technology. They also wanted to share experiences socially, so their attitude toward technology during this period is explained mostly by the stimulation hypothesis. After the pandemic, however, their focus shifted toward their own wellbeing. Social isolation and loneliness were already significant issues for OAs, and these were exacerbated by the pandemic; OAs’ attitudes toward technology after the pandemic are therefore explained mostly by the disengagement hypothesis. This clearly reflects OAs’ current situation: they have become further digitally excluded by the rapid technological development during the pandemic. Both during and after the pandemic, OAs found it important to have technologies that were easy to use, which would reduce their digital exclusion; after the pandemic, we found this especially for newly developed technologies meant to help people keep at a distance. To effectively integrate these technologies and avoid excluding large parts of the population, society must address the social challenges faced by OAs.
