Feed aggregator

The construction sector is investigating wood as a highly sustainable material for the fabrication of architectural elements. Several researchers in the field of construction are currently designing novel timber structures as well as novel solutions for fabricating them, i.e., robot technologies that allow for automation of a domain dominated by skilled craftsmen. In this paper, we present a framework for closing the loop between the design and robotic assembly of timber structures. On the one hand, we illustrate an extended automation process that incorporates learning by demonstration to learn and execute a complex assembly of an interlocking wooden joint. On the other hand, we describe a design case study that builds upon the specificity of this process to achieve new designs of construction elements, which previously could only be assembled by skilled craftsmen. The paper provides an overview of a process with different levels of focus, from the integration of a digital twin into timber joint design and robotic assembly execution, to the development of a flexible robotic setup and novel assembly procedures for dealing with the complexity of the designed timber joints. We discuss synergistic results on both robotic and construction design innovation, with an outlook on future developments.
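
The abstract does not say how the demonstrated assembly skill is represented; purely as a hedged illustration, a common learning-by-demonstration encoding is the dynamic movement primitive (DMP), which fits a forcing term to a demonstrated trajectory and replays it toward a (possibly new) goal pose \(g\):

\[
\tau \dot{z} = \alpha_z\bigl(\beta_z\,(g - y) - z\bigr) + f(x), \qquad
\tau \dot{y} = z, \qquad
\tau \dot{x} = -\alpha_x x,
\]

where \(y\) is the reproduced motion, \(x\) a phase variable, and \(f(x)\) is learned from the demonstration (e.g., by locally weighted regression); the paper’s actual representation may differ.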

As assistive robotics has expanded to many task domains, comparing assistive strategies among the varieties of research becomes increasingly difficult. To begin to unify the disparate domains into a more general theory of assistance, we present a definition of assistance, a survey of existing work, and three key design axes that occur in many domains and benefit from the examination of assistance as a whole. We first define an assistance perspective that focuses on understanding a robot that is in control of its actions but subordinate to a user’s goals. Next, we use this perspective to explore design axes that arise from the problem of assistance more generally and explore how these axes have comparable trade-offs across many domains. We investigate how the assistive robot handles other people in the interaction, how the robot design can operate in a variety of action spaces to enact similar goals, and how assistive robots can vary the timing of their actions relative to the user’s behavior. While these axes are by no means comprehensive, we propose them as useful tools for unifying assistance research across domains and as examples of how taking a broader perspective on assistance enables more cross-domain theorizing about assistance.

Active debris removal in space has become a necessary activity to maintain and facilitate orbital operations. Current approaches tend to adopt autonomous robotic systems, which are often furnished with a robotic arm to safely capture debris by identifying a suitable grasping point. These systems are controlled by mission-critical software, where a software failure can lead to mission failure that is difficult to recover from, since the robotic systems are not easily accessible to humans. Therefore, verifying that these autonomous robotic systems function correctly is crucial. Formal verification methods enable us to analyse the software that is controlling these systems and to provide a proof of correctness that the software obeys its requirements. However, robotic systems tend not to be developed with verification in mind from the outset, which can often complicate the verification of the final algorithms and systems. In this paper, we describe the process that we used to verify a pre-existing system for autonomous grasping which is to be used for active debris removal in space. In particular, we formalise the requirements for this system using the Formal Requirements Elicitation Tool (FRET). We formally model specific software components of the system and formally verify that they adhere to their corresponding requirements using the Dafny program verifier. From the original FRET requirements, we synthesise runtime monitors using ROSMonitoring and show how these can provide runtime assurances for the system. We also describe our experimentation and analysis of the testbed and the associated simulation. We provide a detailed discussion of our approach and describe how the modularity of this particular autonomous system simplified the usually complex task of verifying a system post-development.
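
As a rough, framework-agnostic sketch of the runtime-monitoring idea, the snippet below checks a stream of events against one formalized property. The property, event fields, and speed limit are illustrative assumptions, not the paper’s FRET requirements or the ROSMonitoring API.

```python
from dataclasses import dataclass

# Hypothetical property for illustration: "once a grasping point is
# selected, commanded end-effector speed shall stay below a threshold".

MAX_APPROACH_SPEED = 0.05  # m/s, assumed limit for this sketch

@dataclass
class MonitorState:
    grasp_point_selected: bool = False

def check_event(state: MonitorState, event: dict) -> bool:
    """Return True if this event satisfies the monitored property."""
    if event["type"] == "grasp_point_selected":
        state.grasp_point_selected = True
        return True
    if event["type"] == "velocity_command" and state.grasp_point_selected:
        return abs(event["speed"]) <= MAX_APPROACH_SPEED
    return True

# Example trace: the third event violates the property.
trace = [
    {"type": "velocity_command", "speed": 0.20},
    {"type": "grasp_point_selected"},
    {"type": "velocity_command", "speed": 0.20},
]
state = MonitorState()
for event in trace:
    if not check_event(state, event):
        print("requirement violated:", event)
```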

Purpose: This research aimed to evaluate medication software for a healthcare robot. Study I compared two software versions (RoboGen and RoboGen2) for system usability, speed and accuracy of medication entry; Study II evaluated system usability and community pharmacists’ views of RoboGen2.

Methods: Study I had a within-subjects experimental design and recruited 40 Health Sciences students to enter different, comparable sets of prescriptions into the two systems, in randomized order, within a limit of 15 min. Screen activity was recorded to observe prescription errors. Study II had a cross-sectional observational design and recruited 20 community pharmacists using convenience sampling. Pharmacists entered three prescriptions using RoboGen2. Participants in both studies completed the System Usability Scale (SUS) following each task. Study I participants completed a questionnaire on system preference, and Study II participants a semi-structured interview.
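
For reference, the SUS used in both studies has a fixed scoring scheme (ten 1–5 Likert items; odd items contribute the score minus 1, even items 5 minus the score, and the sum is scaled by 2.5 to a 0–100 range). A minimal sketch, with illustrative responses:

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten 1-5 responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    # i = 0 corresponds to item 1, which is odd-numbered.
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # 80.0
```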

Results: Study I participants preferred RoboGen2 (p < 0.001) due to its sleek and modern layout, good flow, ease of use, and intuitive design. SUS scores [t(40) = −3.40, p = 0.002] and speed of medication entry favored RoboGen2 (t = 3.65, p < 0.001). No significant difference was found in accuracy (t = 1.12, p = 0.27). In Study II, pharmacists rated the usability of RoboGen2 below average. Themes from the interviews were navigation and streamlining the system, ease of use, and integration with pharmacy software systems.

Conclusion: Adding safety features and better aesthetics can improve the usability and safety of a medication prescription system. Streamlining workflow and pre-populating data can increase speed of prescription entry without compromising patient safety. However, a better approach is integration with pre-existing pharmacy systems to reduce workload while incorporating safety features built into existing dispensing systems.

Social robots are reported to hold great potential for education. However, both scholars and key stakeholders worry that children’s social-emotional development may be compromised. Aiming to provide new insights into the impact social robots can have on the social-emotional development of children, the current study interviewed teachers who use social robots in their day-to-day educational practice. The results of our interviews with these experienced teachers indicate that the social robots currently used in education pose little threat to the social-emotional development of children. Children with special needs seem to be more sensitive to social-affective bonding with a robot than typically developing children, and this bond seems to have positive effects in enabling them to connect more easily with their human peers and teachers. However, if robots were introduced more regularly, for daily use without the involvement of a human teacher, new issues could arise. For now, given the current state of technology and the way social robots are being applied, other (ethical) issues seem more urgent, such as privacy, security, and the workload of teachers. Future studies should focus on these issues first, to ensure a safe and effective educational environment for both children and teachers.

Restoring and improving the ability to walk is a top priority for individuals with movement impairments due to neurological injuries. Powered exoskeletons coupled with functional electrical stimulation (FES), called hybrid exoskeletons, exploit the benefits of activating muscles and robotic assistance for locomotion. In this paper, a cable-driven lower-limb exoskeleton is integrated with FES for treadmill walking at a constant speed. A nonlinear robust controller is used to activate the quadriceps and hamstrings muscle groups via FES to achieve kinematic tracking about the knee joint. Moreover, electric motors adjust the knee joint stiffness throughout the gait cycle using an integral torque feedback controller. For the hip joint, a robust sliding-mode controller is developed to achieve kinematic tracking using electric motors. The human-exoskeleton dynamic model is derived using Lagrangian dynamics and incorporates phase-dependent switching to capture the effects of transitioning from the stance to the swing phase, and vice versa. Moreover, low-level control input switching is used to activate individual muscles and motors to achieve flexion and extension about the hip and knee joints. A Lyapunov-based stability analysis is developed to ensure exponential tracking of the kinematic and torque closed-loop error systems, while guaranteeing that the control input signals remain bounded. The developed controllers were tested in real-time walking experiments on a treadmill in three able-bodied individuals at two gait speeds. The experimental results demonstrate the feasibility of coupling a cable-driven exoskeleton with FES for treadmill walking using a switching-based control strategy and exploiting both kinematic and force feedback.
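
The abstract does not reproduce the control laws; as a textbook illustration of the kind of robust sliding-mode tracking controller described, consider a single joint with dynamics \(M\ddot{q} + C\dot{q} + G = u + d\) and bounded disturbance \(|d| \le \bar{d}\):

\[
e = q - q_d, \qquad s = \dot{e} + \lambda e \ (\lambda > 0), \qquad
u = M(\ddot{q}_d - \lambda\dot{e}) + C\dot{q} + G - K\,\operatorname{sgn}(s).
\]

With \(V = \tfrac{1}{2}Ms^2\), one gets \(\dot{V} = s\,(d - K\operatorname{sgn}(s)) \le -(K - \bar{d})\,|s| < 0\) for \(K > \bar{d}\), so \(s\), and hence the tracking error, is driven to zero. The paper’s Lyapunov analysis additionally handles phase-dependent switching and input boundedness.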

Unhealthy eating behavior is a major public health issue with serious repercussions on an individual’s health. One potential solution to overcome this problem, and help people change their eating behavior, is to develop conversational systems able to recommend healthy recipes. One challenge for such systems is to deliver personalized recommendations matching users’ needs and preferences. Beyond the intrinsic quality of the recommendation itself, various factors might also influence users’ perception of a recommendation. In this paper, we present Cora, a conversational system that recommends recipes aligned with its users’ eating habits and current preferences. Users can interact with Cora in two different ways: they can select pre-defined answers by clicking on buttons, or write text in natural language. Additionally, Cora can engage users through a social dialogue or go straight to the point. Cora is also able to propose different alternatives and to justify its recipe recommendations by explaining the trade-offs between them. We conduct two experiments. In the first, we evaluate the impact of Cora’s conversational skills and users’ interaction mode on users’ perception and intention to cook the recommended recipes. Our results show that a conversational recommendation system that engages its users through a rapport-building dialogue improves users’ perception of the interaction as well as of the system. In the second, we evaluate the influence of Cora’s explanations and recommendation comparisons on users’ perception. Our results show that explanations positively influence users’ perception of a recommender system. However, comparing healthy recipes with a decoy is a double-edged sword: although such a comparison is perceived as significantly more useful than a single healthy recommendation, explaining the difference between the decoy and the healthy recipe would actually make people less likely to use the system.

In recent years, with the rapid development of minimally invasive surgery (MIS), force sensing for the surgical instruments used in MIS has become an increasingly desirable capability amongst clinicians. However, it remains an open technical challenge, since most existing tactile sensing principles are not suitable for the small 3-dimensional (3D) curved surfaces often seen in surgical instruments, and as a result multi-point force detection cannot be realized. In this paper, a novel optical waveguide-based sensor is proposed to address this research gap. A sensor prototype for curved surfaces resembling the surface of dissection forceps was developed and experimentally evaluated. The static parameters and dynamic response characteristics of the sensor were measured. Results show that the static hysteresis error is less than 3%, the resolution is 0.026 N, and the repeatability error is less than 1.5%. At a frequency of 12.5 Hz, the sensor could quickly measure variations of the force signal. We demonstrate that this small, high-precision sensor design is promising for creating multi-point tactile sensing for minimally invasive surgical instruments with 3D surfaces.
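
To make the static metrics concrete, a common way to compute a hysteresis error of this kind is the largest loading/unloading gap relative to the full-scale output; a minimal sketch with illustrative (not measured) data:

```python
import numpy as np

def hysteresis_error(loading, unloading):
    """Static hysteresis error as % of full-scale output: largest gap
    between loading and unloading curves at matched input points."""
    loading, unloading = np.asarray(loading), np.asarray(unloading)
    full_scale = loading.max() - loading.min()
    return 100.0 * np.max(np.abs(loading - unloading)) / full_scale

# Illustrative sensor outputs (volts) at matched force inputs.
load = [0.00, 0.51, 1.02, 1.55, 2.01]
unload = [0.03, 0.55, 1.06, 1.57, 2.01]
print(f"{hysteresis_error(load, unload):.2f} % FS")  # ~1.99 % FS
```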

Background: Damaged cardiac tissues could potentially be regenerated by transplanting bioengineered cardiac patches to the heart surface. To be fully paradigm-shifting, such patches may need to be transplanted using minimally invasive robotic cardiac surgery (not only traditional open surgery). Here, we present novel robotic designs, initial prototyping and a new surgical operation for instruments to transplant patches via robotic minimally invasive heart surgery.

Methods: Robotic surgical instruments and automated control systems were designed, tested with simulation software and prototyped. Surgical proof-of-concept testing was performed on a pig cadaver.

Results: Three robotic instrument designs were developed. The first (called “Claw” for the claw-like patch holder at the tip) operates on a rack-and-pinion mechanism. The second design (“Shell-Beak”) uses adjustable folding plates and rods with a bevel-gear mechanism. The third (“HeartStamp”) utilizes a stamp platform protruding through an adjustable ring. For the HeartStamp, rods run through a cylindrical structure designed to fit a uniportal Video-Assisted Thoracoscopic Surgery (VATS) surgical port. The instrument is designed to work with or without a sterile sheath; the patch is pushed out by the stamp platform as it protrudes. Two instrument robotic control systems were designed and simulated in silico, and one of these underwent early ‘sizing and learning’ prototyping as a proof of concept. To reflect real surgical conditions, surgery was run “live” and reported exactly as it happened. We successfully picked up, transferred, and released a patch onto the heart using the HeartStamp in a pig cadaver model.

Conclusion: These world-first designs, early prototypes and a novel surgical operation pave the way for robotic instruments for automated keyhole patch transplantation to the heart. Our novel approach is presented for others to build upon free from restrictions or cost—potentially a significant moment in myocardial regeneration surgery which may open a therapeutic avenue for patients unfit for traditional open surgery.

We study two approaches for predicting an appropriate pose for a robot to take part in group formations typical of social human conversations subject to the physical layout of the surrounding environment. One method is model-based and explicitly encodes key geometric aspects of conversational formations. The other method is data-driven. It implicitly models key properties of spatial arrangements using graph neural networks and an adversarial training regimen. We evaluate the proposed approaches through quantitative metrics designed for this problem domain and via a human experiment. Our results suggest that the proposed methods are effective at reasoning about the environment layout and conversational group formations. They can also be used repeatedly to simulate conversational spatial arrangements despite being designed to output a single pose at a time. However, the methods showed different strengths. For example, the geometric approach was more successful at avoiding poses generated in nonfree areas of the environment, but the data-driven method was better at capturing the variability of conversational spatial formations. We discuss ways to address open challenges for the pose generation problem and other interesting avenues for future work.
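
As a toy illustration of the geometric reasoning such a model-based approach encodes (this is not the paper’s method, and it omits the check against non-free areas of the environment), one can place the robot on the group’s circle, in the widest angular gap, facing the centroid:

```python
import numpy as np

def propose_pose(members, radius=None):
    """Toy heuristic: stand on the circle around the group centroid,
    in the widest angular gap between members, facing the centroid."""
    pts = np.asarray(members, dtype=float)
    center = pts.mean(axis=0)
    if radius is None:
        radius = np.linalg.norm(pts - center, axis=1).mean()
    angles = np.sort(np.arctan2(*(pts - center).T[::-1]))
    gaps = np.diff(np.append(angles, angles[0] + 2 * np.pi))
    a = angles[np.argmax(gaps)] + gaps.max() / 2  # middle of widest gap
    position = center + radius * np.array([np.cos(a), np.sin(a)])
    heading = np.arctan2(*(center - position)[::-1])  # face the group
    return position, heading

print(propose_pose([(0.0, 0.0), (1.2, 0.1), (0.6, 1.0)]))
```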

Automated surface vessels must integrate many tasks and motions at the same time. Moreover, vessels as well as monitoring and control services need to react to physical disturbances, to dynamically allocate software resources available within a particular environment, and to communicate with various other actors in particular navigation and traffic situations. In this work, the responsibility for situational awareness is given to a mediator that decides how: 1) to assess the impact of the actual physical environment on the quality and performance of the ongoing task executions; 2) to ensure that these tasks satisfy the system requirements; and 3) to remain robust against disturbances. This paper proposes a set of semantic world models within the context of inland waterway transport, and discusses policies and methodologies to compose, use, and connect these models. Model-conform entities and relations are composed dynamically, that is, corresponding to the opportunities and challenges offered by the actual situation. The semantic world models discussed in this work are divided into two main categories: 1) the semantic description of a vessel’s own properties and relationships, called the internal world model, or body model, and 2) the semantic description of its local environment, called the external world model, or map. A range of experiments illustrates the potential of using such models to decide the reactions of the application at runtime. Furthermore, three dynamic, context-dependent ship domains are integrated in the map as two-dimensional geometric entities around a moving vessel to increase the situational awareness of automated vessels. Their geometric representations depend on the associated relations, for example with: 1) the motion of the vessel, 2) the actual, desired, or hypothesised tasks, 3) perception sensor information, and 4) other geometries, e.g., features from the Inland Electronic Navigational Charts. The ability to unambiguously understand the environmental context, as well as the motion or position of surrounding entities, allows for resource-efficient and straightforward control decisions. The semantic world models facilitate knowledge sharing between actors, and significantly enhance the explainability of the actors’ behaviour and control decisions.
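
A much-simplified, hypothetical rendering of the two model categories (the class names, fields, and speed-dependent ellipse below are illustrative assumptions, not the paper’s ontology):

```python
from dataclasses import dataclass, field

@dataclass
class BodyModel:
    """Internal world model: the vessel's own properties."""
    name: str
    length_m: float
    beam_m: float
    speed_mps: float
    heading_rad: float

@dataclass
class ShipDomain:
    """Context-dependent 2D safety region around a moving vessel,
    sketched here as an ellipse whose half-axes grow with speed."""
    owner: BodyModel
    base_margin_m: float = 10.0
    time_horizon_s: float = 30.0

    def half_axes(self):
        ahead = self.base_margin_m + self.owner.speed_mps * self.time_horizon_s
        abeam = self.base_margin_m + self.owner.beam_m
        return ahead, abeam

@dataclass
class ExternalWorldModel:
    """Map: semantic entities in the vessel's local environment."""
    entities: dict = field(default_factory=dict)  # id -> entity

vessel = BodyModel("MS-Example", 38.5, 5.05, 2.0, 0.0)
print(ShipDomain(vessel).half_axes())  # (70.0, 15.05)
```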

As voice-user interfaces (VUIs), such as smart speakers like Amazon Alexa or social robots like Jibo, enter multi-user environments like our homes, it is critical to understand how group members perceive and interact with these devices. VUIs engage socially with users, leveraging multi-modal cues including speech, graphics, expressive sounds, and movement. The combination of these cues can affect how users perceive and interact with these devices. Through a set of three elicitation studies, we explore family interactions (N = 34 families, 92 participants, ages 4–69) with three commercially available VUIs with varying levels of social embodiment. The motivation for these three studies began when researchers noticed that families interacted differently with three agents when familiarizing themselves with the agents and, therefore, we sought to further investigate this trend in three subsequent studies designed as a conceptional replication study. Each study included three activities to examine participants’ interactions with and perceptions of the three VUIS in each study, including an agent exploration activity, perceived personality activity, and user experience ranking activity. Consistent for each study, participants interacted significantly more with an agent with a higher degree of social embodiment, i.e., a social robot such as Jibo, and perceived the agent as more trustworthy, having higher emotional engagement, and having higher companionship. There were some nuances in interaction and perception with different brands and types of smart speakers, i.e., Google Home versus Amazon Echo, or Amazon Show versus Amazon Echo Spot between the studies. In the last study, a behavioral analysis was conducted to investigate interactions between family members and with the VUIs, revealing that participants interacted more with the social robot and interacted more with their family members around the interactions with the social robot. This paper explores these findings and elaborates upon how these findings can direct future VUI development for group settings, especially in familial settings.

Catheter-based endovascular interventional procedures have become increasingly popular in recent years as they are less invasive, and patients spend less time in the hospital with less recovery time and pain. These advantages have led to significant growth in the number of procedures performed annually. However, it is still challenging to position a catheter in a target vessel branch within the highly complicated and delicate vascular structure. In fact, vessel tortuosity and angulation, which cause difficulties in catheterization and in reaching the target site, have been reported as the main causes of failure in endovascular procedures. Maneuverability of a catheter for intravascular navigation is key to reaching the target area; the ability of a catheter to move within the target vessel during trajectory tracking thus greatly affects the length and success of the procedure. To address this issue, this paper models soft catheter robots with multiple actuators and provides a time-dependent model for characterizing the dynamics of multi-actuator soft catheter robots. Built on this model, an efficient and scalable optimization-based framework is developed for guiding the catheter to pass through arteries and reach the target where an aneurysm is located. The proposed framework models the deflection of the multi-actuator soft catheter robot and develops a control strategy for moving the catheter along a desired trajectory. This provides a simulation-based framework for the selection of catheters prior to endovascular catheterization procedures, assuring that, given a fixed design, the catheter is able to reach the target location. The results demonstrate the benefits that can be achieved by the design and control of catheters with multiple actuators for navigation into small vessels.
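
As a schematic sketch of the optimization-based idea, the toy planar model below solves for segment curvatures (the “actuator inputs”) that bring the catheter tip to a target. The two-segment constant-curvature model, segment lengths, bounds, and target are assumptions standing in for the paper’s time-dependent dynamics:

```python
import numpy as np
from scipy.optimize import minimize

L = np.array([0.06, 0.04])  # assumed segment lengths (m)

def tip_position(kappas):
    """Forward kinematics of planar piecewise-constant-curvature arcs."""
    p, theta = np.zeros(2), 0.0
    for k, l in zip(kappas, L):
        if abs(k) < 1e-9:
            p = p + l * np.array([np.cos(theta), np.sin(theta)])
        else:
            p = p + np.array([np.sin(theta + k * l) - np.sin(theta),
                              -np.cos(theta + k * l) + np.cos(theta)]) / k
        theta += k * l
    return p

target = np.array([0.07, 0.05])  # illustrative aneurysm location (m)
res = minimize(lambda k: np.sum((tip_position(k) - target) ** 2),
               x0=[1.0, 1.0], bounds=[(-30, 30), (-30, 30)])
print(res.x, tip_position(res.x))
```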

To fully understand the evolution of complex morphologies, analyses cannot stop at selection: It is essential to investigate the roles and interactions of multiple processes that drive evolutionary outcomes. The challenges of undertaking such analyses have affected both evolutionary biologists and evolutionary roboticists, with their common interests in complex morphologies. In this paper, we present analytical techniques from evolutionary biology, selection gradient analysis and morphospace walks, and we demonstrate their applicability to robot morphologies in analyses of three evolutionary mechanisms: randomness (genetic mutation), development (an explicitly implemented genotype-to-phenotype map), and selection. In particular, we applied these analytical techniques to evolved populations of simulated biorobots—embodied robots designed specifically as models of biological systems, for the testing of biological hypotheses—and we present a variety of results, including analyses that do all of the following: illuminate different evolutionary dynamics for different classes of morphological traits; illustrate how the traits targeted by selection can vary based on the likelihood of random genetic mutation; demonstrate that selection on two selected sets of morphological traits only partially explains the variance in fitness in our biorobots; and suggest that biases in developmental processes could partially explain evolutionary dynamics of morphology. When combined, the complementary analytical approaches discussed in this paper can enable insight into evolutionary processes beyond selection and thereby deepen our understanding of the evolution of robotic morphologies.
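
For readers unfamiliar with selection gradient analysis, the standard Lande–Arnold formulation regresses relative fitness on trait values (the paper’s exact estimator may differ):

\[
w = \alpha + \boldsymbol{\beta}^{\top}\mathbf{z} + \varepsilon,
\qquad
\boldsymbol{\beta} = \mathbf{P}^{-1}\mathbf{S},
\]

where \(\mathbf{z}\) is the vector of (morphological) trait values, \(w\) relative fitness, \(\mathbf{P}\) the phenotypic covariance matrix, \(\mathbf{S}\) the vector of selection differentials, and \(\boldsymbol{\beta}\) the directional selection gradients.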

The growing interest in soft robotics has resulted in an increased demand for accurate and reliable material modelling. As soft robots experience high deformations, highly nonlinear behavior is possible. Several analytical models able to capture this nonlinear behavior have been proposed; however, accurately calibrating them for specific materials and applications can be challenging. Multiple experimental testbeds may be required for material characterization, which can be expensive and cumbersome. In this work, we propose an alternative framework for parameter fitting of established hyperelastic material models, with the aim of improving their utility in the modelling of soft continuum robots. We define a minimization problem to reduce fitting errors between a soft continuum robot deformed experimentally and its equivalent finite element simulation. The soft material is characterized using four commonly employed hyperelastic material models (Neo-Hookean, Mooney–Rivlin, Yeoh, and Ogden). To meet the complexity of the defined problem, we use an evolutionary algorithm to navigate the search space and determine optimal parameters for a selected material model and a specific actuation method, naming this approach Evolutionary Inverse Material Identification (EIMI). We test the proposed approach with a magnetically actuated soft robot by characterizing two polymers often employed in the field: Dragon Skin™ 10 MEDIUM and Ecoflex™ 00-50. To determine the goodness of fit of the FEM simulation for a specific set of model parameters, we define a function that measures the distance between the mesh of the FEM simulation and the experimental data. Our characterization framework showed an improvement greater than 6% over conventional model-fitting approaches at different strain ranges based on the defined benchmark. Furthermore, the low variability across the different models obtained using our approach demonstrates reduced dependence on model and strain-range selection, making it well suited to application-specific soft robot modelling.
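
A schematic sketch of the EIMI loop (here scipy’s differential evolution stands in for the paper’s evolutionary algorithm, and a toy closed-form function stands in for the FEM solve; all names and numbers are illustrative):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative experimentally observed marker positions (mm).
experimental_pts = np.array([[0.0, 0.0], [5.1, 8.2], [9.8, 17.6]])

def simulate(params):
    """Placeholder for the FEM prediction of marker positions given
    hyperelastic model parameters (e.g., two Yeoh coefficients)."""
    c1, c2 = params
    x = experimental_pts[:, 0]
    return np.stack([x, 0.07 * c1 * x + 0.008 * c2 * x**2], axis=1)

def fitness(params):
    """Mean distance between simulated and experimental points,
    the quantity the evolutionary search minimizes."""
    return np.mean(np.linalg.norm(simulate(params) - experimental_pts,
                                  axis=1))

result = differential_evolution(fitness,
                                bounds=[(0.0, 50.0), (0.0, 10.0)],
                                seed=0, tol=1e-8)
print(result.x, result.fun)  # recovers c1 ~ 20, c2 ~ 5 here
```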

Metallic tools such as graspers, forceps, spatulas, and clamps have long been used in proximity to delicate neurological tissue, and the risk of damage to this tissue is a primary concern for neurosurgeons. Novel soft robotic technologies offer an opportunity to shift the design paradigm for these tools towards safer, more compliant, minimally invasive methods. Here, we present a pneumatically actuated, origami-inspired deployable brain retractor aimed at atraumatic surgical workspace generation inside the cranial cavity. We discuss clinical requirements, design, fabrication, analytical modeling, experimental characterization, and in vitro validation of the proposed device on a brain model.

Despite developments in robotics and automation technologies, several challenges need to be addressed to fulfill the high demand for automating various manufacturing processes in the food industry. In our opinion, these challenges can be classified as: the development of robotic end-effectors that cope with the large variation of food products at high practicality and low cost; the recognition of food products and materials in 3D scenarios; and a better understanding of fundamental information about food products, including food categorization and physical properties, from the viewpoint of robotic handling. In this review, we first introduce the challenges in robotic food handling and then highlight advances in robotic end-effectors, food recognition, and fundamental information about food products related to robotic food handling. Finally, future research directions and opportunities are discussed based on an analysis of the challenges and state-of-the-art developments.

The robot rights debate has thus far proceeded without any reliable data concerning public opinion about robots and the rights they should have. We administered an online survey (n = 439) that investigates laypeople’s attitudes toward granting particular rights to robots, asked respondents their reasons for being willing to grant those rights, and assessed general perceptions of robots regarding appearance, capacities, and traits. Results show that rights can be divided into sociopolitical and robot dimensions, and reasons can be distinguished along cognition and compassion dimensions. People generally have a positive view of robot interaction capacities. We found that people are more willing to grant robots basic rights, such as access to energy and the right to update, than sociopolitical rights, such as voting rights and the right to own property. Attitudes toward granting rights to robots depend on the cognitive and affective capacities people believe robots possess or will possess in the future. Our results suggest that the robot rights debate stands to benefit greatly from a common understanding of the capacity potentials of future robots.

As mobile robots are increasingly introduced into our daily lives, it grows ever more imperative that these robots navigate with and among people in a safe and socially acceptable manner, particularly in shared spaces. While research on enabling socially-aware robot navigation has expanded over the years, there are no agreed-upon evaluation protocols or benchmarks to allow for the systematic development and evaluation of socially-aware navigation. As an effort to aid more productive development and progress comparisons, in this paper we review the evaluation methods, scenarios, datasets, and metrics commonly used in previous socially-aware navigation research, discuss the limitations of existing evaluation protocols, and highlight research opportunities for advancing socially-aware robot navigation.

In this paper, we investigate the impact of sensory sensitivity during robot-assisted training for children diagnosed with Autism Spectrum Disorder (ASD). User adaptation in robot-based therapies could help users focus on the training and thus improve the benefits of the interaction. Children diagnosed with ASD often experience sensory sensitivity and can show hyper- or hypo-reactivity to sensory events, such as reacting strongly, or not at all, to sounds, movements, or touch. Taking this sensitivity into account during robot therapies may improve the overall interaction. In the present study, thirty-four children diagnosed with ASD underwent joint attention training with the robot Cozmo. The eight-session training was embedded in their standard therapy. The children were screened for sensory sensitivity with the Sensory Profile Checklist Revised, and their social skills were screened before and after the training with the Early Social Communication Scale. We recorded their performance and the amount of feedback they received from the therapist through animations of happy and sad emotions played on the robot. Our results showed that visual and hearing sensitivity influenced improvements in the skill of initiating joint attention. Also, the therapists of individuals with high sensitivity to hearing chose to play fewer robot animations during the training phase of the robot activity. The animations did not include sounds, but the robot produced motor noise. These results support the idea that the sensory sensitivity of children diagnosed with ASD should be screened before engaging them in robot-assisted therapy.
