Feed aggregator

Despite the advances in mobile robotics, the introduction of autonomous robots into human-populated environments remains slow. One of the fundamental reasons is the acceptance of robots by the people directly affected by their presence. Understanding human behavior and dynamics is essential for planning when and how robots should traverse busy environments without disrupting people’s natural motion and causing irritation. Researchers have explored various techniques to build spatio-temporal representations of people’s presence and flows and have compared their applicability for planning optimal paths in the future. Many comparisons of dynamic map-building techniques show how one method performs against another on a particular dataset, but without consistent datasets and high-quality comparison metrics, it is difficult to assess how these methods compare as a whole and in specific tasks. This article proposes a methodology for creating high-quality criteria with interpretable results for comparing long-term spatio-temporal representations for human-aware path planning and human-aware navigation scheduling. Two criteria derived from the methodology are then applied to compare the representations built by the techniques found in the literature. The approaches are compared on a real-world, long-term dataset, and the concept is validated in a field experiment on a robotic platform deployed in a human-populated environment. Our results indicate that continuous spatio-temporal methods independently modeling spatial and temporal phenomena outperform other modeling approaches. Our results provide a baseline for future work to compare a wide range of methods employed for long-term navigation and give researchers an understanding of how these methods compare in specific scenarios.
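
To give a flavor of what a continuous spatio-temporal method that models spatial and temporal phenomena independently can look like, here is a minimal sketch that fits a few fixed-period cosine/sine components to one grid cell's occupancy observations. The daily and weekly periods, the data, and all names are illustrative assumptions and are not the specific models evaluated in the article.

```python
# Illustrative sketch only (not one of the article's evaluated models): fit a few
# fixed-period cosine/sine components to one grid cell's binary occupancy
# observations, yielding a continuous-time estimate of how likely people are to be
# present at any query time.
import numpy as np

PERIODS_S = [24 * 3600.0, 7 * 24 * 3600.0]  # assumed daily and weekly rhythms


def design_matrix(t: np.ndarray) -> np.ndarray:
    cols = [np.ones_like(t)]
    for period in PERIODS_S:
        w = 2.0 * np.pi / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.stack(cols, axis=1)


def fit_cell(timestamps: np.ndarray, occupied: np.ndarray) -> np.ndarray:
    """Least-squares fit of the periodic model to 0/1 occupancy observations."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(timestamps), occupied, rcond=None)
    return coeffs


def predict(coeffs: np.ndarray, t: np.ndarray) -> np.ndarray:
    return np.clip(design_matrix(t) @ coeffs, 0.0, 1.0)


# Example: a corridor cell that is busy around 9:00 every day, observed for two weeks.
rng = np.random.default_rng(0)
t_obs = np.arange(0.0, 14 * 24 * 3600.0, 600.0)
p_true = 0.05 + 0.85 * np.exp(-0.5 * (((t_obs / 3600.0) % 24 - 9.0) / 1.5) ** 2)
occupied = (rng.random(t_obs.size) < p_true).astype(float)

model = fit_cell(t_obs, occupied)
t_query = np.array([15 * 24 * 3600.0 + 9 * 3600.0])  # 9:00 on day 15
print(predict(model, t_query))  # noticeably higher than the overnight estimate
```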



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2022: 11 July–17 July 2022, BANGKOK
IEEE CASE 2022: 20 August–24 August 2022, MEXICO CITY
CLAWAR 2022: 12 September–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4 November–5 November 2022, LOS ANGELES
CoRL 2022: 14 December–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

According to Google Translate, these PoKeBo Cubes chat amongst themselves to communicate useful information to you. Like, they’ll talk to each other about current events and the weather, which you’ll pick up just by being nearby and low-key listening in.

[ ICD Lab ]

This video demonstrates human-multirobot collaborative mobile manipulation with the Omnid mobile collaborative robots, or "mocobots" for short. Key features of mocobots include passive compliance, for the safety of the human and the payload, and high-fidelity end effector force control independent of the potentially imprecise motions of the mobile base.

[ Paper ]

I’m not sure how autonomous this thing actually is, but it looks like a heck of a lot of fun to ride around on anyway.

[ KAERI Robot Lab ]

Recently Digit took in some of the sights around our Pittsburgh office. There’s a lot to explore out there and no shortage of robot enthusiasts. With a strong lineage connected to Carnegie Mellon University, we’re proud to maintain a presence in this great city.

I’ll be honest: I wanted a little more to happen with the bowling.

[ Agility ]

Some supercool research from the Max Planck Institute and ETH Zurich presented at CoRL 2022: teaching legged robots agile behaviors through direct physical demonstration.

[ CoRL2022 ]

SRI has an enduring legacy in the field of robotics. In 1995 SRI licensed its telemanipulation software to Intuitive Surgical, which became the foundation for the da Vinci surgical robot. Since then, researchers at SRI have continued to optimize the system to work with today’s commercial robot arms, bringing high dexterity telemanipulation to a wide range of industries.

[ SRI ]

Meet Josh, a mechanical engineer at Boston Dynamics. Hear his perspective on what goes into building a robot—from learning the right skills, to collaborating across teams, to designing and testing new parts.

[ Boston Dynamics ]

NSF Science Now: Learn how mechanical engineers are developing new prosthetic legs with a natural, stable walking motion and how swallowing a robot could lead to more effective medical drug delivery in the body.

[ NSF ]

We present a model-based optimization framework that optimizes base pose and footholds simultaneously. It can generate motions in rough environments for a variety of different gaits in real time.

[ Paper ]

With traditional techniques, training robots often requires hundreds of hours of data, but this is not a practical way to train robots on every variation of a task. U-M researchers used data augmentation to develop a method that will expand these data sets.

[ Paper ]

If you missed RSS this year, all the livestreams are up. Here’s (most of) Day 1, and Day 2 and Day 3 can be found on the RSS YouTube channel.

[ RSS on YouTube ]



This article describes an approach for multiagent search planning for a team of agents. A team of UAVs tasked to conduct a forest fire search was selected as the use case, although the solution is applicable to other domains. Fixed-path methods for multiagent search (e.g., parallel track, expanding square) produce predictable and structured paths, but their main limitations are poor management of agents’ resources and limited adaptability, because the paths follow predefined geometric patterns. Pseudorandom methods, on the other hand, allow agents to generate well-separated paths, but they can be computationally expensive and can leave agents’ activities poorly coordinated. We present a hybrid solution that exploits the complementary strengths of fixed-pattern and pseudorandom methods, i.e., an approach that is resource-efficient, predictable, adaptable, and scalable. Our approach builds on the Delaunay triangulation of systematically selected waypoints to allocate agents to explore a specific region while optimizing a given set of mission constraints. We implement our approach in a simulation environment and compare the performance of the proposed algorithm with fixed-path and pseudorandom baselines. The results demonstrate the resource efficiency, predictability, scalability, and adaptability of the generated paths. We also demonstrate the proposed algorithm’s application on real UAVs.
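
The core geometric step mentioned in the abstract, triangulating systematically selected waypoints and handing the resulting regions to individual agents, can be sketched in a few lines. The grid spacing, the contiguous-chunk allocation rule, and every function name below are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch: triangulate systematically sampled waypoints and split the
# resulting triangles among UAVs. The grid spacing, the chunked allocation rule, and
# every name below are illustrative assumptions, not the algorithm from the paper.
import numpy as np
from scipy.spatial import Delaunay


def waypoint_grid(width_m: float, height_m: float, spacing_m: float) -> np.ndarray:
    """Systematically sample waypoints over a rectangular search region."""
    xs = np.arange(0.0, width_m + spacing_m, spacing_m)
    ys = np.arange(0.0, height_m + spacing_m, spacing_m)
    return np.array([(x, y) for x in xs for y in ys])


def allocate_triangles(points: np.ndarray, n_agents: int) -> list[list[np.ndarray]]:
    """Delaunay-triangulate the waypoints and give each agent a contiguous block."""
    tri = Delaunay(points)
    centroids = points[tri.simplices].mean(axis=1)
    order = np.lexsort((centroids[:, 1], centroids[:, 0]))  # sweep left to right
    chunks = np.array_split(order, n_agents)
    return [[points[tri.simplices[i]] for i in chunk] for chunk in chunks]


if __name__ == "__main__":
    pts = waypoint_grid(1000.0, 600.0, spacing_m=100.0)
    plan = allocate_triangles(pts, n_agents=3)
    for agent, triangles in enumerate(plan):
        print(f"UAV {agent}: {len(triangles)} triangles to sweep")
```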

Industrial contexts typically characterized by highly unstructured environments, where task sequences are difficult to hard-code and unforeseen events occur daily (e.g., oil and gas, energy generation, aeronautics), cannot rely completely on automation to substitute for human dexterity and judgment. Robots operating in these conditions share the requirement of deploying appropriate behaviors in highly dynamic and unpredictable environments, while aiming for more natural human-robot interaction and broad acceptance as providers of useful and efficient services. The goal of this paper is to introduce a deliberative framework able to acquire, reuse, and instantiate a collection of behaviors that extend the autonomy periods of mobile robotic platforms, with a focus on maintenance, repair, and overhaul applications. Behavior trees are employed to design the robotic system’s high-level deliberative intelligence, which integrates: social behaviors, aiming to capture the human’s emotional state and intention; the ability to either perform or support various process tasks; and seamless planning and execution of human-robot shared work plans. In particular, the modularity, reactiveness, and deliberation capacity that characterize the behavior-tree formalism are leveraged to interpret the human’s health and cognitive load in order to support her/him, and to complete a shared mission through collaboration or complete take-over. By enabling mobile robotic platforms to take over risky jobs that the human cannot, should not, or does not want to perform, the proposed framework has high potential to significantly improve safety, productivity, and efficiency in harsh working environments.
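
As an illustration of why the behavior-tree formalism suits this kind of deliberation, here is a minimal, hand-rolled sketch of a fallback/sequence tree that tries to support the human and otherwise takes over the task. The node names and tree layout are invented for illustration and are not the framework described in the paper.

```python
# Minimal behavior-tree sketch (not the paper's framework): a fallback node tries
# collaboration first and falls back to autonomous take-over; all condition and
# action leaves here are illustrative stand-ins.
from enum import Enum
from typing import Callable, List


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


class Node:
    def tick(self) -> Status: ...


class Leaf(Node):
    def __init__(self, name: str, fn: Callable[[], Status]):
        self.name, self.fn = name, fn

    def tick(self) -> Status:
        return self.fn()


class Sequence(Node):  # succeeds only if all children succeed, in order
    def __init__(self, children: List[Node]):
        self.children = children

    def tick(self) -> Status:
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS


class Fallback(Node):  # tries children in order until one does not fail
    def __init__(self, children: List[Node]):
        self.children = children

    def tick(self) -> Status:
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE


# Illustrative leaves: a real system would query perception, planners, and the human.
human_is_fit = Leaf("human_is_fit", lambda: Status.SUCCESS)
support_human = Leaf("support_human", lambda: Status.SUCCESS)
take_over_task = Leaf("take_over_task", lambda: Status.SUCCESS)

mission = Fallback([Sequence([human_is_fit, support_human]), take_over_task])
print(mission.tick())  # ticked periodically, which is what makes the tree reactive
```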

To enable the application of humanoid robots outside of laboratory environments, a biped must meet certain requirements. These include, in particular, coping with dynamic motions such as climbing stairs or ramps or walking over irregular terrain. Sit-to-stand transitions also belong to this category. Beyond their practical use, for example getting out of vehicles or standing up from a chair at a table, these motions are also valuable for performance assessment; they have long been used in sports medicine and geriatrics to assess human performance. Here, we develop optimized sit-to-stand trajectories using optimal control, which are characterized by their dynamic and humanlike nature. We implement these motions on the humanoid robot REEM-C. Based on the obtained sensor data, we present a unified benchmarking procedure built on two different experimental protocols. These protocols are characterized by their increasing level of difficulty and quantify different aspects of lower-limb performance. We report performance results obtained by REEM-C using two categories of indicators: primary, scenario-specific indicators that assess overall performance (chair height and ankle-to-chair distance), and subsidiary, general indicators that further describe performance. The latter provide a more detailed analysis of the applied motion and are based on metrics such as the angular momentum, zero-moment point, capture point, or foot placement estimator. In the process, we identify performance deficiencies of the robot based on the collected data. This work is thus an important step toward a unified quantification of bipedal performance in the execution of humanlike and dynamically demanding motions.
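
For readers unfamiliar with one of the subsidiary indicators mentioned above, the instantaneous capture point of the linear inverted-pendulum model is simply the center-of-mass position offset by its velocity scaled by sqrt(z0/g). The sketch below computes it from made-up numbers and is not taken from the paper's benchmarking protocol.

```python
# Capture point of the linear inverted pendulum: xi = x_com + v_com * sqrt(z0 / g).
# The numbers below are made up for illustration; the paper's protocol uses sensor
# data collected on the REEM-C robot.
import math


def capture_point(x_com: float, v_com: float, z0: float, g: float = 9.81) -> float:
    """Return the point the robot must step to in order to come to rest."""
    return x_com + v_com * math.sqrt(z0 / g)


# Example: CoM 5 cm ahead of the ankle, moving forward at 0.4 m/s, pendulum height 0.8 m.
print(f"capture point at {capture_point(0.05, 0.4, 0.8):.3f} m ahead of the ankle")
```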

The formal description and verification of networks of cooperative and interacting agents is made difficult by the interplay of several different behavioral patterns, communication models, and scalability issues. In this paper, we explore the functionality and expressiveness of a general-purpose process-algebraic framework for the specification and model-checking-based analysis of collective and cooperative systems. The proposed syntactic and semantic schemes are general enough to be adapted, with small modifications, to heterogeneous application domains such as crowdsourcing systems, trustworthy networks, and distributed ledger technologies.



Garbage is a global problem that each of us contributes to. Since the 1970s, we've all been told we can help fix that problem by assiduously recycling bottles and cans, boxes and newspapers.

So far, though, we haven’t been up to the task. Only 16 percent of the 2.1 billion tonnes of solid waste that the world produces every year gets recycled. The U.S. Environmental Protection Agency estimates that the United States recycled only about 32 percent of its garbage in 2018, putting the country in the middle of the pack worldwide. Germany, on the high end, captures about 65 percent, while Chile and Turkey barely do anything, recycling a mere 1 percent of their trash, according to a 2015 report by the Organization for Economic Cooperation and Development (OECD).

Here in the United States, of the 32 percent of the trash that we try to recycle, about 80 to 95 percent actually gets recycled, as Jason Calaiaro of AMP Robotics points out in “AI Takes a Dumpster Dive.” The technology that Calaiaro’s company is developing could move us closer to 100 percent. But it would have no effect on the two-thirds of the waste stream that never makes it to recyclers.

Certainly, the marginal gains realized by AI and robotics will help the bottom lines of recycling companies, making it profitable for them to recover more useful materials from waste. But to make a bigger difference, we need to address the problem at the beginning of the process: Manufacturers and packaging companies must shift to more sustainable designs that use less material or more recyclable ones.

According to the Joint Research Centre of the European Commission, more than “80 percent of all product-related environmental impacts are determined during the design phase of a product.” One company that applies AI at the start of the design process is Digimind GmbH based in Berlin. As CEO Katharina Eissing told Packaging Europe last year, Digimind’s AI-aided platform lets package designers quickly assess the outcome of changes they make to designs. In one case, Digimind reduced the weight of a company’s 1.5-liter plastic bottles by 13.7 percent, a seemingly small improvement that becomes more impressive when you consider that the company produces 1 billion of these bottles every year.

That’s still just a drop in the polyethylene terephthalate bucket: The world produced an estimated 583 billion PET bottles last year, according to Statista. To truly address our global garbage problem, our consumption patterns must change: canteens instead of single-use plastic bottles, compostable paper boxes instead of plastic clamshell containers, reusable shopping bags instead of “disposable” plastic ones. And engineers involved in product design need to develop packaging free of PET, polystyrene, and polycarbonate, which break down into tiny particles called microplastics that researchers are now finding in human blood and feces.

As much as we may hope that AI can solve our problems for us, that’s wishful thinking. Human ingenuity got us into this mess and humans will have to regulate, legislate, and otherwise incentivize the private sector to get us out of it.



It’s Tuesday night. In front of your house sits a large blue bin, full of newspaper, cardboard, bottles, cans, foil take-out trays, and empty yogurt containers. You may feel virtuous, thinking you’re doing your part to reduce waste. But after you rinse out that yogurt container and toss it into the bin, you probably don’t think much about it ever again.

The truth about recycling in many parts of the United States and much of Europe is sobering. Tomorrow morning, the contents of the recycling bin will be dumped into a truck and taken to the recycling facility to be sorted. Most of the material will head off for processing and eventual use in new products. But a lot of it will end up in a landfill.

So how much of the material that goes into the typical bin avoids a trip to landfill? For countries that do curbside recycling, the number—called the recovery rate—appears to average around 70 to 90 percent, though widespread data isn’t available. That doesn’t seem bad. But in some municipalities, it can go as low as 40 percent.

What’s worse, only a small quantity of all recyclables makes it into the bins—just 32 percent in the United States and 10 to 15 percent globally. That’s a lot of material made from finite resources that needlessly goes to waste.

We have to do better than that. Right now, the recycling industry is facing a financial crisis, thanks to falling prices for sorted recyclables as well as a policy, enacted by China in 2018, that restricts the import of many materials destined for recycling and shuts out most recyclables originating in the United States.

There is a way to do better. Using computer vision, machine learning, and robots to identify and sort recycled material, we can improve the accuracy of automatic sorting machines, reduce the need for human intervention, and boost overall recovery rates.

My company, Amp Robotics, based in Louisville, Colo., is developing hardware and software that relies on image analysis to sort recyclables with far higher accuracy and recovery rates than are typical for conventional systems. Other companies are similarly working to apply AI and robotics to recycling, including Bulk Handling Systems, Machinex, and Tomra. To date, the technology has been installed in hundreds of sorting facilities around the world. Expanding its use will prevent waste and help the environment by keeping recyclables out of landfills and making them easier to reprocess and reuse.

AMP Robotics

Before I explain how AI will improve recycling, let’s look at how recycled materials were sorted in the past and how they’re being sorted in most parts of the world today.

When recycling began in the 1960s, the task of sorting fell to the consumer—newspapers in one bundle, cardboard in another, and glass and cans in their own separate bins. That turned out to be too much of a hassle for many people and limited the amount of recyclable materials gathered.

In the 1970s, many cities took away the multiple bins and replaced them with a single container, with sorting happening downstream. This “single stream” recycling boosted participation, and it is now the dominant form of recycling in developed countries.

Moving the task of sorting further downstream led to the building of sorting facilities. To do the actual sorting, recycling entrepreneurs adapted equipment from the mining and agriculture industries, filling in with human labor as necessary. These sorting systems had no computer intelligence, relying instead on the physical properties of materials to separate them. Glass, for example, can be broken into tiny pieces and then sifted and collected. Cardboard is rigid and light—it can glide over a series of mechanical camlike disks, while other, denser materials fall in between the disks. Ferrous metals can be magnetically separated from other materials; magnetism can also be induced in nonferrous items, like aluminum, using a large eddy current.

By the 1990s, hyperspectral imaging, developed by NASA and first launched in a satellite in 1972, was becoming commercially viable and began to show up in the recycling world. Unlike human eyes, which mostly see in combinations of red, green, and blue, hyperspectral sensors divide images into many more spectral bands. The technology’s ability to distinguish between different types of plastics changed the game for recyclers, bringing not only optical sensing but computer intelligence into the process. Programmable optical sorters were also developed to separate paper products, distinguishing, say, newspaper from junk mail.
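
To make concrete how a hyperspectral sorter distinguishes plastics, here is a toy nearest-centroid classifier over per-band reflectances. The band count, the reflectance values, and the reference signatures are invented for illustration and bear no relation to real near-infrared spectra or to any commercial sorter.

```python
# Toy illustration of spectral sorting: classify a pixel's reflectance vector by the
# nearest reference signature. The signatures below are made-up numbers, not real
# near-infrared spectra of PET, HDPE, or fiber.
import numpy as np

REFERENCE_SIGNATURES = {  # one reflectance value per spectral band (hypothetical)
    "PET": np.array([0.62, 0.48, 0.30, 0.55, 0.41]),
    "HDPE": np.array([0.58, 0.52, 0.44, 0.33, 0.47]),
    "fiber": np.array([0.70, 0.69, 0.66, 0.64, 0.60]),
}


def classify_pixel(reflectance: np.ndarray) -> str:
    """Return the material whose reference signature is closest in Euclidean distance."""
    return min(REFERENCE_SIGNATURES,
               key=lambda m: np.linalg.norm(reflectance - REFERENCE_SIGNATURES[m]))


print(classify_pixel(np.array([0.60, 0.50, 0.32, 0.52, 0.40])))  # -> "PET"
```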

So today, much of the sorting is automated. These systems generally sort to 80 to 95 percent purity—that is, 5 to 20 percent of the output shouldn’t be there. For the output to be profitable, however, the purity must be higher than 95 percent; below this threshold, the value drops, and often it’s worth nothing. So humans manually clean up each of the streams, picking out stray objects before the material is compressed and baled for shipping.
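
Since purity and recovery rate come up repeatedly in this article, it may help to pin the two metrics down. The sketch below just encodes their standard per-material, mass-based definitions; the specific numbers are illustrative.

```python
# Standard sorting metrics, per target material and by mass:
#   purity   = target mass in the output bale / total mass in the output bale
#   recovery = target mass in the output bale / target mass entering the facility
def purity(target_out_kg: float, total_out_kg: float) -> float:
    return target_out_kg / total_out_kg


def recovery(target_out_kg: float, target_in_kg: float) -> float:
    return target_out_kg / target_in_kg


# Illustrative numbers: 900 kg of PET lands in a 1,000 kg PET bale,
# out of 1,100 kg of PET that entered on the trucks.
print(f"purity   = {purity(900, 1000):.0%}")    # 90%
print(f"recovery = {recovery(900, 1100):.0%}")  # 82%
```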

Despite all the automated and manual sorting, about 10 to 30 percent of the material that enters the facility ultimately ends up in a landfill. In most cases, more than half of that material is recyclable and worth money but was simply missed.

We’ve pushed the current systems as far as they can go. Only AI can do better.

Getting AI into the recycling business means combining pick-and-place robots with accurate real-time object detection. Pick-and-place robots combined with computer vision systems are used in manufacturing to grab particular objects, but they generally are just looking repeatedly for a single item, or for a few items of known shapes and under controlled lighting conditions. Recycling, though, involves infinite variability in the kinds, shapes, and orientations of the objects traveling down the conveyor belt, requiring nearly instantaneous identification along with the quick dispatch of a new trajectory to the robot arm.
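
A minimal sketch of that detect-then-dispatch timing problem is shown below: given an object detected at a known belt position and a known belt speed, the controller predicts where the object will be once perception and motion latency have elapsed. The latency figures, the coordinate frame, and the constant-velocity prediction are assumptions made for illustration, not a description of AMP's control stack.

```python
# Hypothetical detect-and-intercept timing for a conveyor picker. Latencies, belt
# speed, and the simple constant-velocity prediction are illustrative assumptions,
# not a description of AMP Robotics' system.
from dataclasses import dataclass


@dataclass
class Detection:
    x_m: float  # position along the belt at the moment of the camera frame
    y_m: float  # lateral position across the belt
    t_s: float  # timestamp of the frame


def intercept_point(det: Detection, now_s: float, belt_speed_mps: float,
                    arm_latency_s: float) -> tuple[float, float]:
    """Predict where the object will be once perception + motion latency has elapsed."""
    dt = (now_s - det.t_s) + arm_latency_s
    return det.x_m + belt_speed_mps * dt, det.y_m


# Example: detected 0.08 s ago, the arm needs 0.25 s to move, belt runs at 2.5 m/s.
x, y = intercept_point(Detection(x_m=1.20, y_m=0.30, t_s=10.00),
                       now_s=10.08, belt_speed_mps=2.5, arm_latency_s=0.25)
print(f"command the gripper to ({x:.2f} m, {y:.2f} m)")  # 1.20 m + 0.825 m downstream
```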

AI-based systems guide robotic arms to grab materials from a stream of mixed recyclables and place them in the correct bins. Here, a tandem robot system operates at a Waste Connections recycling facility [top], and a single robot arm [bottom] recovers a piece of corrugated cardboard. The United States does a pretty good job when it comes to cardboard: In 2021, 91.4 percent of discarded cardboard was recycled, according to the American Forest and Paper Association. AMP Robotics

My company first began using AI in 2016 to extract empty cartons from other recyclables at a facility in Colorado; today, we have systems installed in more than 25 U.S. states and six countries. We weren’t the first company to try AI sorting, but it hadn’t previously been used commercially. And we have steadily expanded the types of recyclables our systems can recognize and sort.

AI makes it theoretically possible to recover all of the recyclables from a mixed-material stream at accuracy approaching 100 percent, entirely based on image analysis. If an AI-based sorting system can see an object, it can accurately sort it.

Consider a particularly challenging material for today’s recycling sorters: high-density polyethylene (HDPE), a plastic commonly used for detergent bottles and milk jugs. (In the United States, Europe, and China, HDPE products are labeled as No. 2 recyclables.) In a system that relies on hyperspectral imaging, batches of HDPE tend to be mixed with other plastics and may have paper or plastic labels, making it difficult for the hyperspectral imagers to detect the underlying object’s chemical composition.

An AI-driven computer-vision system, by contrast, can determine that a bottle is HDPE and not something else by recognizing its packaging. Such a system can also use attributes like color, opacity, and form factor to increase detection accuracy, and even sort by color or specific product, reducing the amount of reprocessing needed. Though the system doesn’t attempt to understand the meaning of words on labels, the words are part of an item’s visual attributes.

We at AMP Robotics have built systems that can do this kind of sorting. In the future, AI systems could also sort by combinations of material and by original use, enabling food-grade materials to be separated from containers that held household cleaners, and paper contaminated with food waste to be separated from clean paper.

Training a neural network to detect objects in the recycling stream is not easy. It is at least several orders of magnitude more challenging than recognizing faces in a photograph, because there can be a nearly infinite variety of ways that recyclable materials can be deformed, and the system has to recognize the permutations.

Inside the Sorting Center

Today’s recycling facilities use mechanical sorting, optical hyperspectral sorting, and human workers. Here’s what typically happens after the recycling truck leaves your house with the contents of your blue bin.

Trucks unload on a concrete pad, called the tip floor. A front-end loader scoops up material in bulk and dumps it onto a conveyor belt, typically at a rate of 30 to 60 tonnes per hour.

The first stage is the presort. Human workers remove large or problematic items that shouldn’t have made it onto collection trucks in the first place—bicycles, big pieces of plastic film, propane canisters, car transmissions.


Sorting machines that rely on optical hyperspectral imaging or human workers separate fiber (office paper, cardboard, magazines—referred to as 2D products, as they are mostly flat) from the remaining plastics and metals. In the case of the optical sorters, cameras stare down at the material rolling down the conveyor belt, detect an object made of the target substance, and then send a message to activate a bank of electronically controllable solenoids to divert the object into a collection bin.


The nonfiber materials pass through a mechanical system with densely packed camlike wheels. Large items glide past while small items, like that recyclable fork you thoughtfully deposited in your blue bin, slip through, headed straight for landfill—they are just too small to be sorted. Machines also smash glass, which falls to the bottom and is screened out.


The rest of the stream then passes under overhead magnets, which collect items made of ferrous metals, and an eddy-current-inducing machine, which jolts nonferrous metals to another collection area.


At this point, mostly plastics remain. More hyperspectral sorters, in series, can pull off plastics one type—like the HDPE of detergent bottles and the PET of water bottles—at a time.

Finally, whatever is left—between 10 and 30 percent of what came in on the trucks—goes to landfill.


In the future, AI-driven robotic sorting systems and AI inspection systems could replace human workers at most points in this process. In the diagram, red icons indicate where AI-driven robotic systems could replace human workers and a blue icon indicates where an AI auditing system could make a final check on the success of the sorting effort.

It’s hard enough to train a neural network to identify all the different types of bottles of laundry detergent on the market today, but it’s an entirely different challenge when you consider the physical deformations that these objects can undergo by the time they reach a recycling facility. They can be folded, torn, or smashed. Mixed into a stream of other objects, a bottle might have only a corner visible. Fluids or food waste might obscure the material.

We train our systems by giving them images of materials belonging to each category, sourced from recycling facilities around the world. My company now has the world’s largest data set of recyclable material images for use in machine learning.

Using this data, our models learn to identify recyclables in the same way their human counterparts do, by spotting patterns and features that distinguish different materials. We continuously collect random samples from all the facilities that use our systems, and then annotate them, add them to our database, and retrain our neural networks. We also test our networks to find models that perform best on target material and do targeted additional training on materials that our systems have trouble identifying correctly.
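
One hedged way to picture how such targeted additional training might be scheduled is sketched below: classes with the worst held-out accuracy get the largest share of the next labeling budget. The class names, accuracy numbers, and inverse-accuracy weighting are illustrative assumptions, not AMP's actual pipeline.

```python
# Illustrative only: allocate the next annotation budget toward material classes the
# current model handles worst. The weighting rule and the numbers are assumptions,
# not AMP Robotics' training pipeline.
def labeling_quota(per_class_accuracy: dict[str, float], budget: int) -> dict[str, int]:
    """Split an annotation budget in proportion to each class's error rate."""
    errors = {cls: max(1.0 - acc, 0.0) for cls, acc in per_class_accuracy.items()}
    total = sum(errors.values()) or 1.0
    return {cls: round(budget * err / total) for cls, err in errors.items()}


held_out_accuracy = {"aluminum_can": 0.98, "PET_bottle": 0.93,
                     "HDPE_jug": 0.88, "food_grade_PP": 0.74}
print(labeling_quota(held_out_accuracy, budget=10_000))
# Most of the new labels go to food_grade_PP, the class the model struggles with most.
```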

In general, neural networks are susceptible to learning the wrong thing. Pictures of cows are associated with milk packaging, which is commonly produced as a fiber carton or HDPE container. But milk products can also be packaged in other plastics; for example, single-serving milk bottles may look like the HDPE of gallon jugs but are usually made from an opaque form of the PET (polyethylene terephthalate) used for water bottles. Cows don’t always mean fiber or HDPE, in other words.

There is also the challenge of staying up to date with the continual changes in consumer packaging. Any mechanism that relies on visual observation to learn associations between packaging and material types will need to consume a steady stream of data to ensure that objects are classified accurately.

But we can get these systems to work. Right now, our systems do really well on certain categories—more than 98 percent accuracy on aluminum cans—and are getting better at distinguishing nuances like color, opacity, and initial use (spotting those food-grade plastics).

Now that AI-based systems are ready to take on your recyclables, how might things change? Certainly, they will boost the use of robotics, which is only minimally used in the recycling industry today. Given the perpetual worker shortage in this dull and dirty business, automation is a path worth taking.

AI can also help us understand how well today’s existing sorting processes are doing and how we can improve them. Today, we have a very crude understanding of the operational efficiency of sorting facilities—we weigh trucks on the way in and weigh the output on the way out. No facility can tell you the purity of the products with any certainty; they only audit quality periodically by breaking open random bales. But if you placed an AI-powered vision system over the inputs and outputs of relevant parts of the sorting process, you’d gain a holistic view of what material is flowing where. This level of scrutiny is just beginning in hundreds of facilities around the world, and it should lead to greater efficiency in recycling operations. Being able to digitize the real-time flow of recyclables with precision and consistency also provides opportunities to better understand which recyclable materials are and are not currently being recycled and then to identify gaps that will allow facilities to improve their recycling systems overall.

Sorting Robot Picking Mixed Plastics AMP Robotics

But to really unleash the power of AI on the recycling process, we need to rethink the entire sorting process. Today, recycling operations typically whittle down the mixed stream of materials to the target material by removing nontarget material—they do a “negative sort,” in other words. Instead, using AI vision systems with robotic pickers, we can perform a “positive sort.” Instead of removing nontarget material, we identify each object in a stream and select the target material.

To be sure, our recovery rate and purity are only as good as our algorithms. Those numbers continue to improve as our systems gain more experience in the world and our training data set continues to grow. We expect to eventually hit purity and recovery rates of 100 percent.

The implications of moving from more mechanical systems to AI are profound. Rather than coarsely sorting to 80 percent purity and then manually cleaning up the stream to 95 percent purity, a facility can reach the target purity on the first pass. And instead of having a unique sorting mechanism handling each type of material, a sorting machine can change targets just by a switch in algorithm.

The use of AI also means that we can recover materials long ignored for economic reasons. Until now, it was only economically viable for facilities to pursue the most abundant, high-value items in the waste stream. But with machine-learning systems that do positive sorting on a wider variety of materials, we can start to capture a greater diversity of material at little or no overhead to the business. That’s good for the planet.

We are beginning to see a few AI-based secondary recycling facilities go into operation, with Amp’s technology first coming online in Denver in late 2020. These systems are currently used where material has already passed through a traditional sort, seeking high-value materials missed or low-value materials that can be sorted in novel ways and therefore find new markets.

Thanks to AI, the industry is beginning to chip away at the mountain of recyclables that end up in landfills each year—a mountain containing billions of tons of recyclables representing billions of dollars lost and nonrenewable resources wasted.

This article appears in the July 2022 print issue as “AI Takes a Dumpster Dive.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

The Real Robotics Lab at University of Leeds presents two Chef quadruped robots remotely controlled by a single operator to make a tasty burger as a team. The operator uses a gamepad to control their walking and a wearable motion capture system for manipulation control of the robotic arms mounted on the legged robots.

We’re told that these particular quadrupeds are vegans, and that the vegan burgers they make are “very delicious.”

[ Real Robotics ]

Thanks Chengxu!

Elasto-plastic materials like Play-Doh can be difficult for robots to manipulate. RoboCraft is a system that enables a robot to learn how to shape these materials in just ten minutes.

[ MIT ]

Thanks, Rachel!

State-of-the-art frame interpolation methods generate intermediate frames by inferring object motions in the image from consecutive key-frames. In the absence of additional information, first-order approximations, i.e. optical flow, must be used, but this choice restricts the types of motions that can be modeled, leading to errors in highly dynamic scenarios. Event cameras are novel sensors that address this limitation by providing auxiliary visual information in the blind-time between frames.

[ ETH Zurich ]

Loopy is a robotic swarm of one Degree-of-Freedom (DOF) agents (i.e., a closed-loop made of 36 Dynamixel servos). Each agent (servo) makes its own local decisions based on interactions with its two neighbors. In this video, Loopy is trying to go from an arbitrary initial shape to a goal shape (Flying WV).

[ WVU ]

A collaboration between Georgia Tech Robotic Musicianship Group and Avshalom Pollak Dance Theatre. The robotic arms respond to the dancers’ movement and to the music. Our goal is for both humans and robots to be surprised and inspired by each other. If successful, both humans and robots will be dancing differently than they did before they met.

[ Georgia Tech ]

Thanks, Gil!

Lingkang Zhang wrote in to share a bipedal robot he’s working on. It’s 70 centimeters tall, runs ROS, can balance and walk, and costs just US $200!

[ YouTube ]

Thanks, Lingkang!

The public-private partnership between NASA and Redwire will demonstrate the ability of a small spacecraft—OSAM-2 (On-orbit Servicing, Assembly, and Manufacturing 2)—to manufacture and assemble spacecraft components in low Earth orbit.

[ NASA ]

Inspired by fireflies, researchers create insect-scale robots that can emit light when they fly, which enables motion tracking and communication.

The ability to emit light also brings these microscale robots, which weigh barely more than a paper clip, one step closer to flying on their own outside the lab. These robots are so lightweight that they can’t carry sensors, so researchers must track them using bulky infrared cameras that don’t work well outdoors. Now, they’ve shown that they can track the robots precisely using the light they emit and just three smartphone cameras.
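The blurb doesn’t spell out the math, but tracking a light source with three calibrated cameras comes down to classic ray triangulation: back-project the detected blob from each camera and intersect the rays in a least-squares sense. A minimal sketch (illustrative only, not MIT’s code):

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares intersection of camera rays.

    centers    : (N, 3) camera positions.
    directions : (N, 3) unit vectors from each camera toward the detected blob.
    Returns the 3D point minimizing the summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Toy example: three cameras looking at a light-emitting robot at (0.1, 0.2, 1.5).
target = np.array([0.1, 0.2, 1.5])
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dirs = np.array([(target - c) / np.linalg.norm(target - c) for c in centers])
print(triangulate(centers, dirs))   # ≈ [0.1, 0.2, 1.5]
```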

[ MIT ]

Unboxing and getting started with a TurtleBot 4 robotics learning platform with Maddy Thomson, Robotics Demo Designer from Clearpath Robotics.

[ Clearpath ]

We present a new gripper and exploration approach that uses a finger with very low reflected inertia for probing and then grasping objects. The finger employs a transparent transmission, resulting in a light touch when contact occurs. Experiments show that the finger can safely move faster into contacts than industrial parallel jaw grippers or even most force-controlled grippers with backdrivable transmissions. This property allows rapid proprioceptive probing of objects.
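The “light touch” comes from reflected inertia: what the fingertip feels of the motor through the transmission grows with the square of the gear ratio, which is why a nearly direct-drive (“transparent”) transmission can run into things so gently. A back-of-the-envelope comparison, with illustrative numbers rather than Stanford’s:

```python
# Reflected inertia at the output of a gearbox: J_out = J_motor * N**2
# (the link's own inertia adds on top). Illustrative values only.
J_motor = 3e-6        # kg*m^2, a small brushless motor rotor
for N in (1, 6, 50):  # direct drive, quasi-direct drive, highly geared
    print(f"gear ratio {N:>3}: reflected inertia = {J_motor * N**2:.2e} kg*m^2")
```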

[ Stanford BDML ]

This is very, very water resistant. I’m impressed.

[ Unitree ]

I have no idea why Pepper is necessary here, but I do love that this ice cream shop is named Quokka.

[ Quokka ]

Researchers at ETH Zurich have developed a wearable textile exomuscle that serves as an extra layer of muscles. They aim to use it to increase the upper body strength and endurance of people with restricted mobility.

[ ETH Zurich ]

VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but LiDAR data and event cameras, and also incorporate other simulated vehicles to model complex driving situations.

[ MIT CSAIL ]

In the second phase of the ANT project, the hexapod CREX and the quadruped Aliengo are traversing rough terrain to show their terrain-adaptation capabilities.

[ DFKI ]

Here are some satisfying food-service robot videos from FOOMA, a trade show in Japan.


Robit CUTR: lettuce coring #FOOMAJAPAN2022


DENSO WAVE: handling irregularly shaped and flexible items #FOOMAJAPAN2022


RT Fondly: bento-box arrangement #FOOMAJAPAN2022

[ Kazumichi Moriyama ]



A cyber-physical system (CPS) is a system with integrated computational and physical abilities. Deriving the notion of a cyber-physical collective (CPC) from a social view of CPS, we consider the nodes of a CPS as individuals (agents) that interact to overcome their individual limits within the collective. When CPC agents are able to move in their environment, the CPC is considered a Mobile CPC (MCPC). The interactions of the agents give rise to a phenomenon collectively generated by the agents of the CPC, which we call a collective product. This phenomenon is not recorded as “a whole” in the CPC because each agent has only a partial view of its environment. This paper presents COPE (COllective Product Exploitation), an approach that allows one MCPC to exploit the collective product of another. The approach is based on the deployment of meta-agents in both systems. A meta-agent is an agent that is external to an MCPC but is associated with one of its agents. Each meta-agent is able to monitor the agent with which it is associated and can fake its perceptions to influence its behavior. The meta-agents deployed in the system from which the collective product emerges provide indicators related to this product. Using these indicators, the meta-agents deployed in the other system can act on its agents to adapt the global dynamics of the whole system. The proposed coupling approach is evaluated in a “fire detection and control” use case, in which a system of UAVs uses the collective product of a network of sensors to monitor a fire.
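The paper’s implementation isn’t reproduced here, but the coupling pattern it describes (one layer of meta-agents deriving indicators from the sensor collective, another layer injecting perceptions into the UAV collective) can be sketched roughly as follows, with hypothetical class and field names:

```python
# Rough sketch of the COPE coupling pattern; the paper's actual architecture may differ.

class SensorAgent:
    """Agent of the first MCPC, e.g. one node of a fire-detection sensor network."""
    def __init__(self, position, temperature):
        self.position = position
        self.temperature = temperature

class MonitoringMetaAgent:
    """External meta-agent: observes one sensor agent and derives an indicator."""
    def __init__(self, agent):
        self.agent = agent
    def indicator(self):
        # e.g. "fire suspected near this position"
        return {"position": self.agent.position, "alarm": self.agent.temperature > 60.0}

class UAVAgent:
    """Agent of the second MCPC; normally perceives only its own surroundings."""
    def __init__(self, position):
        self.position = position
        self.perceived_goal = None
    def step(self):
        if self.perceived_goal is not None:
            self.position = self.perceived_goal   # trivial "fly to goal" behavior

class InfluencingMetaAgent:
    """External meta-agent: injects ("fakes") a perception to steer its UAV."""
    def __init__(self, uav):
        self.uav = uav
    def inject(self, indicators):
        alarms = [i["position"] for i in indicators if i["alarm"]]
        if alarms:
            self.uav.perceived_goal = alarms[0]

# Indicators flow between the two meta-agent layers, never directly
# between the two collectives.
sensors = [SensorAgent((0, 0), 25.0), SensorAgent((5, 3), 80.0)]
monitors = [MonitoringMetaAgent(s) for s in sensors]
uav = UAVAgent((10, 10))
InfluencingMetaAgent(uav).inject([m.indicator() for m in monitors])
uav.step()
print(uav.position)   # (5, 3): the UAV heads toward the suspected fire
```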



One year ago, we wrote about some “high-tech” warehouse robots from Amazon that appeared to be anything but. It was confusing, honestly, to see not just hardware that looked dated but concepts about how robots should work in warehouses that seemed dated as well. Obviously, we’d expected a company like Amazon to be at the forefront of developing robotic technology to make its fulfillment centers safer and more efficient. So it’s a bit of a relief that Amazon has just announced several new robotics projects that rely on sophisticated autonomy to do useful, valuable warehouse tasks.

The highlight of the announcement is Proteus, which is like one of Amazon’s Kiva shelf-transporting robots that’s smart enough (and safe enough) to transition from a highly structured environment to a moderately structured environment, an enormous challenge for any mobile robot.

Proteus is our first fully autonomous mobile robot. Historically, it’s been difficult to safely incorporate robotics in the same physical space as people. We believe Proteus will change that while remaining smart, safe, and collaborative.

Proteus autonomously moves through our facilities using advanced safety, perception, and navigation technology developed by Amazon. The robot was built to be automatically directed to perform its work and move around employees—meaning it has no need to be confined to restricted areas. It can operate in a manner that augments simple, safe interaction between technology and people—opening up a broader range of possible uses to help our employees—such as the lifting and movement of GoCarts, the nonautomated, wheeled transports used to move packages through our facilities.

I assume that moving these GoCarts around is a significant task within Amazon’s warehouse, because last year, one of the robots that Amazon introduced (and that we were most skeptical of) was designed to do exactly that. It was called Scooter, and it was this massive mobile system that required manual loading and could move only a few carts to the same place at the same time, which seemed like a super weird approach for Amazon, as I explained at the time:

We know Amazon already understands that a great way of moving carts around is by using much smaller robots that can zip underneath a cart, lift it up, and carry it around with them. Obviously, the Kiva drive units only operate in highly structured environments, but other AMR companies are making this concept work on the warehouse floor just fine.

From what I can make out from the limited information available, Proteus shows that Amazon is not, in fact, behind the curve with autonomous mobile robots (AMRs) and has actually been doing what makes sense all along, while for some reason occasionally showing us videos of other robots like Scooter and Bert in order to (I guess?) keep their actually useful platforms secret.

Anyway, Proteus looks to be one of Amazon’s newer Kiva mobile bases combined with the sensing and intelligence that allow AMRs to operate in semistructured warehouse environments alongside moderately trained humans. Its autonomy seems to be enabled by a combination of stereo-vision sensors and several planar lidars at the front and sides, a good combination for both safety and effective indoor localization in environments with a bunch of reliably static features.

I’m particularly impressed with the emphasis on human-robot interaction with Proteus, which often seems to be a secondary concern for robots designed for work in industry. The “eyes” are expressive in a minimalist sort of way, and while the front of the robot is very functional in appearance, the arrangement of the sensors and light bar also manages to give it a sort of endearingly serious face. That green light that the robot projects in front of itself also seems to be designed for human interaction—I haven’t seen any sensors that use light like that, but it seems like an effective way of letting a human know that the robot is active and moving. Overall, I think it’s cute, although very much not in a “let’s try to make this robot look cute” way, which is good.

What we’re not seeing with Proteus is all of the software infrastructure required to make it work effectively. Don’t get me wrong—making this hardware cost-effective and reliable enough that Amazon can scale to however many robots it wants (likely a frighteningly large number) is a huge achievement. But there’s also all that fleet-management stuff that gets much more complicated once you have robots autonomously moving things around an active warehouse full of fragile humans who need to be both collaborated with and avoided.

Proteus is certainly the star of the show here, but Amazon did also introduce a couple of new robotic systems. One is Cardinal:

The movement of heavy packages, as well as the reduction of twisting and turning motions by employees, are areas we continually look to automate to help reduce risk of injury. Enter Cardinal, the robotic work cell that uses advanced artificial intelligence (AI) and computer vision to nimbly and quickly select one package out of a pile of packages, lift it, read the label, and precisely place it in a GoCart to send the package on the next step of its journey. Cardinal reduces the risk of employee injuries by handling tasks that require lifting and turning of large or heavy packages or complicated packing in a confined space.

The video of Cardinal looks to be a rendering, so I’m not going to spend too much time on it.

There’s also a new system for transferring pods from containers to adorable little container-hauling robots, designed to minimize the number of times that humans have to reach up or down or sideways:

It’s amazing to look at this kind of thing and realize the amount of effort that Amazon is putting in to maximize the efficiency of absolutely everything surrounding the (so far) very hard-to-replace humans in their fulfillment centers. There’s still nothing that can do a better job than our combination of eyes, brains, and hands when it comes to rapidly and reliably picking random things out of things and putting them into other things, but the sooner Amazon can solve that problem, the sooner the humans that those eyes and brains and hands belong to will be able to direct their attention to more creative and fulfilling tasks. Or that’s the idea, anyway.

Amazon says it expects Proteus to start off moving carts around in specific areas, with the hope that it’ll eventually automate cart movements in its warehouses as much as possible. And Cardinal is still in prototype form, but Amazon hopes that it’ll be deployed in fulfillment centers by next year.



The Big Picture features technology through the lens of photographers.

Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears in our monthly print edition.

Enjoy the latest images, and if you have suggestions, leave a comment below.

Megatruck Runs on the Lightest Gas

Big things are happening in the world of hydrogen-powered vehicles. One of the latest monumental happenings is the debut of Anglo American’s 510-ton hydrogen-powered mining truck. The behemoth, which will put in work at a South African platinum mine, will replace an entire 40-truck fleet that services the mine. Together, those trucks consume about one million liters of diesel fuel each year. The new truck, whose power plant features eight 100-kilowatt hydrogen fuel cells and a 1.2-megawatt battery pack, is just the first earth-moving step in Anglo American’s NuGen project aimed at replacing its global fleet of 400 diesel mining trucks with hydrogen-powered versions. According to the company’s estimates, the switch will be the equivalent of taking half a million diesel-fueled passenger cars off the road.

Waldo Swiegers/Bloomberg/Getty Images


South Pole Snooping Platform

Snooping on penguins for clues regarding how they relate to their polar environment is a job for machines and not men. That is the conclusion reached by a team of researchers that is studying how climate change is threatening penguins’ icy Antarctic habitat and puzzling out how to protect the species that are native to both polar regions. Rather than subjecting members of the team to the bitter cold weather in penguins’ neighborhoods, they’re studying these ecosystems using hybrid autonomous and remote-controlled Husky UGV robots. Four-wheeled robots like the one pictured here are equipped with arrays of sensors such as cameras and RFID scanners that read ID tags in chipped penguins. These allow the research team, representing several American and European research institutes, to track individual penguins, assess how successfully they are breeding, and get a picture of overall penguin population dynamics–all from their labs and offices in more temperate climates.

Clearpath Robotics


Seeing the Whole Scene

This is not a hailstorm with pieces of ice that are straight-edged instead of ball-shaped. The image is meant to illustrate an innovation in imaging that will allow cameras to capture stunning details of objects up close and far afield at the same time. The metalens is inspired by the compound eyes of a long-extinct invertebrate sea creature that could home in on distant objects and not lose focus on things that were up close. In a single photo, the lens can produce sharp images of objects as close as 3 centimeters and as far away as 1.7 kilometers. Previously, image resolution suffered as depth of field increased, and vice versa. But researchers from several labs in China and at the National Institute of Standards and Technology (NIST) in Gaithersburg, Md., have been experimenting with metasurfaces, which are surfaces covered with forests of microscopic pillars (the array of ice-cube-like shapes in the illustration). Tuning the size and shape of the pillars and arranging them so they are separated by distances shorter than the wavelength of light makes the metasurfaces capable of capturing images with unprecedented depth of field.

NIST


Auto Body Arms Race

Painters specializing in automobile detailing might want to begin seeking out new lines of work. Their art may soon be the exclusive province of a robotic arm that can replicate images drawn on paper and in computer programs with unrivaled precision. ABB’s PixelPaint computerized arm makes painting go much faster than is possible with a human artisan because its 1,000 paint nozzles deliver paint to a car’s surface much the same way that an inkjet printer deposits pigment on a sheet of paper. Because there’s no overspray, there is no need for the time-consuming masking and tape-removal steps. This level of precision, which puts 100 percent of the paint on the car, also eliminates paint waste, so paint jobs are less expensive. Heretofore, artistic renderings still needed the expert eye and practiced hand of a skilled artist. But PixelPaint has shown itself capable of laying down designs with a level of intricacy human eyes and hands cannot execute.

ABB


