A real-time adaptive role allocation method based on reinforcement learning is proposed to improve human-robot cooperation performance for a curtain wall installation task. This method breaks with the traditional idea that the robot is regarded as the follower, or that cooperation only switches between leader and follower. In this paper, a self-learning method is proposed that can dynamically adapt and continuously adjust the initiative weight of the robot according to changes in the task. First, the physical human-robot cooperation model, including the role factor, is built. Then, a reinforcement learning model that can adjust the role factor in real time is established, and a reward and action model is designed. The role factor is adjusted continuously according to the combined performance of the human-robot interaction force and the robot's jerk during repeated installation. Finally, the role adjustment rule established above continuously improves the overall performance. Experiments on dynamic role allocation and on the effect of the performance weighting coefficient on the result have been carried out. The results show that the proposed method can realize role adaptation and achieve the dual optimization goal of reducing the sum of the cooperator force and the robot's jerk.
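For illustration, a minimal tabular Q-learning sketch of such a role-factor adaptation loop is given below; the discretisation of the role factor, the reward weighting w, and the measure() interface are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a tabular Q-learning agent nudges the robot's initiative
# weight alpha up or down after each installation cycle, rewarded for keeping
# both the human force and the robot jerk low.
import numpy as np

N_BINS, ACTIONS = 11, (-0.1, 0.0, +0.1)     # alpha in {0.0, ..., 1.0}, three adjustments
Q = np.zeros((N_BINS, len(ACTIONS)))

def reward(mean_force, mean_jerk, w=0.5):
    """Higher reward when the weighted sum of force and jerk is small."""
    return -(w * mean_force + (1.0 - w) * mean_jerk)

def step(state, measure, eps=0.1, lr=0.2, gamma=0.9):
    """One learning update after a repeated installation trial.
    measure(alpha) runs one trial and returns (mean_force, mean_jerk)."""
    a = np.random.randint(len(ACTIONS)) if np.random.rand() < eps else int(Q[state].argmax())
    alpha = np.clip(state / (N_BINS - 1) + ACTIONS[a], 0.0, 1.0)
    next_state = int(round(alpha * (N_BINS - 1)))
    mean_force, mean_jerk = measure(alpha)
    r = reward(mean_force, mean_jerk)
    Q[state, a] += lr * (r + gamma * Q[next_state].max() - Q[state, a])
    return next_state
```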
Human-robot (HR) collaboration (HRC) is an emerging research field because of the complementary advantages of humans and robots. An HRC framework for robotic assembly based on impedance control is proposed in this paper. In the HRC framework, the human is the decision maker, the robot acts as the executor, and the assembly environment provides constraints. The robot is the main executor of the assembly action and has position control, drag-and-drop, positive impedance control, and negative impedance control modes. To reveal the characteristics of the HRC framework, the switching conditions between control modes and the stability of the HR coupled system are discussed. Finally, HRC assembly experiments are conducted in which the assembly task is accomplished with an assembly tolerance of 0.08 mm or with an interference fit. The experiments show that HRC assembly combines the complementary advantages of humans and robots and is efficient in finishing complex assembly tasks.
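A minimal admittance-style sketch of the impedance relation underlying such mode-based HRC control, assuming a single-axis mass-spring-damper model with illustrative gains (not the authors' controller):

```python
# One integration step of M*x'' + B*x' + K*(x - x_d) = F_ext: the measured
# human/environment force is turned into a compliant motion command.
def admittance_step(x, x_dot, x_d, f_ext, M=2.0, B=20.0, K=100.0, dt=0.002):
    """Return the updated position and velocity after one control period dt."""
    x_ddot = (f_ext - B * x_dot - K * (x - x_d)) / M
    x_dot_new = x_dot + x_ddot * dt
    x_new = x + x_dot_new * dt
    return x_new, x_dot_new

# Example: a 5 N push away from the reference produces a small compliant motion.
print(admittance_step(x=0.0, x_dot=0.0, x_d=0.0, f_ext=5.0))
```

Positive versus negative impedance modes can be emulated in such a sketch simply by the sign of the stiffness K.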
Navigation is an essential skill for robots, and it becomes a cumbersome task in a human-populated environment; Industry 5.0 is an emerging trend that focuses on the interaction between humans and robots. Robot behavior in a social setting is key to human acceptance while ensuring human comfort and safety. With advances in robotics technology, the use cases of robots in the tourism and hospitality industry are expanding, yet there are very few experimental studies on how people perceive the navigation behavior of a delivery robot. A robotic platform named "PI" has been designed that incorporates proximity and vision sensors. The robot uses a real-time object recognition algorithm based on You Only Look Once (YOLO) to detect objects and humans during navigation. This study evaluates the human experience: 36 participants took part in a study exploring the perceived social presence, role, and perception of a delivery robot exhibiting different behavior conditions while navigating a hotel corridor. The participants' responses were collected and compared across the behavior conditions demonstrated by the robot, and the results show that humans prefer an assistant role for a robot equipped with audio and visual aids and exhibiting social behavior. This study can help developers gain insight into the expected behavior of a delivery robot.
This paper presents an innovative investigation of prototyping a digital twin (DT) as the platform for human-robot interactive welding and welder behavior analysis. This human-robot interaction (HRI) working style helps enhance human users' operational productivity and comfort, while data-driven welder behavior analysis benefits novice welder training. The HRI system includes three modules: 1) a human user who demonstrates the welding operations offsite, with the operations recorded by motion-tracked handles; 2) a robot that executes the demonstrated welding operations to complete the physical welding tasks onsite; 3) a DT system developed on virtual reality (VR) as a digital replica of the physical human-robot interactive welding environment. The DT system bridges the human user and the robot through a bi-directional information flow: a) transmitting demonstrated welding operations in VR to the robot in the physical environment; b) displaying the physical welding scenes to the human user in VR. Compared with existing DT systems reported in the literature, the developed system better engages human users in interacting with welding scenes through an augmented VR. To verify its effectiveness, six welders, three skilled with manual welding training and three unskilled without any training, tested the system by completing the same welding job; the three skilled welders produced satisfactory welded workpieces, while the three unskilled welders did not. A data-driven approach combining the fast Fourier transform (FFT), principal component analysis (PCA), and a support vector machine (SVM) is developed to analyze their behaviors. Given an operation sequence, i.e., the motion speed sequence of the welding torch, frequency features are first extracted by FFT, then reduced in dimension through PCA, and finally fed into the SVM for classification. The trained model achieves 94.44% classification accuracy on the testing dataset. The successful pattern recognition of skilled welder operations should help accelerate novice welder training.
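A rough scikit-learn sketch of the described FFT-PCA-SVM behavior-classification pipeline; the data shapes, PCA dimension, and kernel are assumptions:

```python
# Classify welder skill (skilled vs. unskilled) from torch motion-speed sequences.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, FunctionTransformer
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def fft_magnitude(X):
    # X: (n_samples, sequence_length) motion-speed sequences of the welding torch
    return np.abs(np.fft.rfft(X, axis=1))

clf = make_pipeline(
    FunctionTransformer(fft_magnitude),  # frequency-domain features
    StandardScaler(),
    PCA(n_components=10),                # dimensionality reduction
    SVC(kernel="rbf"),                   # binary skill classification
)
# clf.fit(X_train, y_train); clf.score(X_test, y_test)
```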
A facial expression emotion recognition based human-robot interaction (FEER-HRI) system is proposed, for which a four-layer system framework is designed. The FEER-HRI system enables robots not only to recognize human emotions but also to generate facial expressions to adapt to human emotions. A facial emotion recognition method based on 2D Gabor features, the uniform local binary pattern (LBP) operator, and a multiclass extreme learning machine (ELM) classifier is presented and applied to real-time facial expression recognition for robots. The facial expressions of the robots are represented by simple cartoon symbols and displayed on an LED screen mounted on the robots, which can be easily understood by humans. Four scenarios, i.e., guiding, entertainment, home service, and scene simulation, are performed in the human-robot interaction experiment, in which smooth communication is realized by facial expression recognition of humans and facial expression generation of robots within 2 seconds. As prospective applications, the FEER-HRI system can be applied in home service, smart homes, safe driving, and so on.
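A minimal sketch of the uniform-LBP feature extraction and an extreme learning machine classifier of the kind described above; the 2D-Gabor stage is omitted and all sizes are illustrative:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform LBP histogram of a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

class ELM:
    """Single-hidden-layer extreme learning machine: random input weights,
    closed-form ridge solution for the output weights."""
    def __init__(self, n_hidden=200, reg=1e-3, seed=0):
        self.n_hidden, self.reg, self.rng = n_hidden, reg, np.random.default_rng(seed)

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid hidden layer

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._h(X)
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ y_onehot)
        return self

    def predict(self, X):
        return self._h(X) @ self.beta   # argmax over columns gives the expression class
```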
In this paper, we present a novel data-driven design method for the human-robot interaction (HRI) system, where a given task is achieved through cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance optimization design and a plant-oriented impedance controller design. The task-oriented design minimizes human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the need for end-effector velocity measurement. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in the task space. Simulations and experiments on a robot manipulator are conducted to verify the efficacy of the presented HRI design framework.
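As a simple stand-in for the data-driven outer-loop optimization (not the paper's reinforcement learning scheme), the sketch below searches over impedance parameters against a cost mixing tracking error and human effort; the cost weights and the evaluate() interface are assumptions:

```python
import numpy as np

def cost(track_err, human_force, w_e=1.0, w_f=0.5):
    """Weighted trade-off between task tracking and human effort."""
    return w_e * np.mean(np.square(track_err)) + w_f * np.mean(np.square(human_force))

def tune_impedance(evaluate, n_iter=50, seed=0):
    """evaluate(M, B, K) -> (tracking_error_trace, human_force_trace);
    returns the best (M, B, K) found by plain random search."""
    rng = np.random.default_rng(seed)
    best, best_c = None, np.inf
    for _ in range(n_iter):
        M, B, K = rng.uniform(0.5, 5), rng.uniform(5, 80), rng.uniform(20, 400)
        c = cost(*evaluate(M, B, K))
        if c < best_c:
            best, best_c = (M, B, K), c
    return best
```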
With the increasing presence of robots in our daily life, there is a strong need for strategies that achieve high-quality interaction between robots and users by enabling robots to understand users' mood, intention, and other aspects. During human-human interaction, personality traits have an important influence on human behavior, decisions, mood, and much else. We therefore propose an efficient computational framework to endow a robot with the capability of understanding a user's personality traits from the user's nonverbal communication cues, represented by three visual features (head motion, gaze, and body motion energy) and three vocal features (voice pitch, voice energy, and the mel-frequency cepstral coefficient, MFCC). We used the Pepper robot in this study as a communication robot that interacted with each participant by asking questions while extracting the nonverbal features from each participant's habitual behavior using its on-board sensors. Each participant's personality traits were also evaluated with a questionnaire. We then trained ridge regression and linear support vector machine (SVM) classifiers using the nonverbal features and the personality trait labels from the questionnaire and evaluated the classifiers' performance. The proposed models showed promising binary classification performance in recognizing each of the Big Five personality traits of the participants from individual differences in nonverbal communication cues.
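A small scikit-learn sketch of the classification stage, assuming the six nonverbal features have already been extracted and aggregated per participant (placeholder data shown):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X = np.random.rand(40, 6)           # placeholder: 40 participants x 6 nonverbal features
y = np.random.randint(0, 2, 40)     # placeholder: high/low label for one Big Five trait

for name, model in [("ridge", RidgeClassifier()), ("linear SVM", LinearSVC())]:
    clf = make_pipeline(StandardScaler(), model)   # scale, then binary classification
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```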
A more natural way for non-expert users to express their tasks in an open-ended set is to use natural language. In this case, a human-centered intelligent agent/robot must be able to understand and generate plans for these naturally expressed tasks. For this purpose, a good way to enhance an intelligent robot's abilities is to utilize open knowledge extracted from the web instead of hand-coded knowledge. A key challenge of utilizing open knowledge lies in the semantic interpretation of the knowledge, which is organized in multiple modes and can be unstructured or semi-structured, before it can be used. Previous approaches used a limited lexicon and employed combinatory categorial grammar (CCG) as the underlying formalism for semantic parsing over sentences. Here, we propose a more effective learning method to interpret semi-structured user instructions. Moreover, we present a new heuristic method to recover missing semantic information from the context of an instruction. Experiments show that the proposed approach yields significant performance improvements over the baseline methods and that the recovery method is promising.
Human-robot interaction (HRI) is fundamental for human-centered robotics and has attracted intensive research for more than a decade. The series elastic actuator (SEA) provides inherent compliance, safety, and further benefits for HRI, but the introduced elastic element also brings control difficulties. In this paper, we address the stiffness rendering problem for a cable-driven SEA system, aiming to achieve either low stiffness for good transparency or high stiffness above the physical spring constant, and to assess the rendering accuracy with quantified metrics. Taking a velocity-sourced model of the motor, a cascaded velocity-torque-impedance control structure is established. To achieve high-fidelity torque control, a 2-DOF (degree-of-freedom) stabilizing control method together with a compensator is used to handle the competing requirements of tracking performance, noise and disturbance rejection, and energy optimization in the cable-driven SEA system. The conventional passivity requirement for HRI usually leads to a conservative design of the impedance controller, and the rendered stiffness cannot exceed the physical spring constant. By adding a phase-lead compensator to the impedance controller, the stiffness rendering capability is augmented with guaranteed relaxed passivity. Extensive simulations and experiments have been performed, and virtual stiffness has been rendered over an extended range of 0.1 to 2.0 times the physical spring constant with guaranteed relaxed passivity for physical human-robot interaction below 5 Hz. Quantified metrics also verify good rendering accuracy.
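A brief sketch of a phase-lead term of the kind added to the impedance controller, using SciPy to inspect the phase it contributes; the time constant and ratio are illustrative, not the paper's values:

```python
# Lead compensator C(s) = (T s + 1) / (a T s + 1), a < 1: adds positive phase
# around 1/(T*sqrt(a)) rad/s, which is the mechanism that relaxes the passivity
# constraint on the rendered stiffness.
import numpy as np
from scipy import signal

T, a = 0.05, 0.2                                 # assumed lead time constant and ratio
lead = signal.TransferFunction([T, 1.0], [a * T, 1.0])

w = np.logspace(0, 3, 200)                       # rad/s
w, mag, phase = signal.bode(lead, w)
print(f"max phase lead: {phase.max():.1f} deg at {w[phase.argmax()]:.1f} rad/s")
```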
With the increase in the elderly population and growing health care costs, the role of service robots in aiding the disabled and the elderly is becoming important. Many researchers around the world have paid considerable attention to healthcare robots and rehabilitation robots. To achieve natural and harmonious communication between the user and a service robot, the information perception/feedback ability and interaction ability of service robots become more important in many key issues.
This paper proposes a novel approach for physical human-robot interaction (pHRI), where a robot provides guidance forces to a user based on the user's performance. The framework tunes the forces according to the behavior of each user in coping with different tasks, where lower performance results in greater intervention from the robot. This personalized physical human-robot interaction (p2HRI) method incorporates adaptive modeling of the interaction between the human and the robot as well as learning from demonstration (LfD) techniques to adapt to the user's performance. The approach is based on model predictive control, where the system optimizes the rendered forces by predicting the performance of the user. Moreover, continuous learning of user behavior is added so that the models and personalized considerations are updated as the user's performance changes over time. Applying this framework to a field such as haptic guidance for skill improvement allows a more personalized learning experience, where the interaction between the robot as the intelligent tutor and the student as the user is better adjusted to the skill level of the individual and their gradual improvement. The results suggest that the precision of the interaction model is improved using the proposed method, and that the considered personalized factors yield a more adaptive strategy for rendering guidance forces.
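A much-simplified sketch of the underlying idea (plain proportional adaptation rather than the paper's MPC formulation): guidance stiffness scales inversely with an estimate of user performance, so lower performance yields stronger robot intervention. All gains are assumptions:

```python
def guidance_force(pos_err, performance, k_max=150.0, k_min=10.0):
    """performance in [0, 1]: 1 = expert (little help), 0 = novice (full help)."""
    k = k_min + (k_max - k_min) * (1.0 - performance)
    return -k * pos_err

def update_performance(prev, tracking_rmse, rmse_ref=0.02, alpha=0.1):
    """Exponentially smoothed performance estimate from recent tracking error."""
    instantaneous = max(0.0, 1.0 - tracking_rmse / rmse_ref)
    return (1.0 - alpha) * prev + alpha * min(1.0, instantaneous)
```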
The unmanned aerial vehicle industry is in the ascendant, while traditional ways of interacting with an unmanned aerial vehicle (UAV) are not intuitive enough. It is difficult for a beginner to control a UAV, so natural interaction methods are preferred. This paper presents a novel interactive control method for a UAV through the operator's gestures and explores natural interaction methods for UAVs. The proposed system uses the Leap Motion controller as an input device to acquire gesture position and orientation data. The proposed human-robot interface is found to track the movement of the operator with satisfactory accuracy. The biggest advantage of the proposed method is its capability to control the UAV with just one hand instead of a joystick. A series of experiments verified the feasibility of the proposed human-robot interface. The results demonstrate that non-professional operators can easily operate a remote UAV using this system.
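A hypothetical sketch of mapping palm pose to UAV velocity commands; the dead zone, scaling constants, and axis conventions are assumptions, and the sensor-reading code is omitted:

```python
import numpy as np

def hand_to_velocity(palm_pos_mm, palm_roll_rad, neutral_mm=(0.0, 200.0, 0.0),
                     dead_zone_mm=20.0, k_lin=0.01, k_yaw=1.0):
    """Return (vx, vy, vz, yaw_rate) from a palm position (mm) and roll angle (rad)."""
    offset = np.asarray(palm_pos_mm, dtype=float) - np.asarray(neutral_mm)
    offset[np.abs(offset) < dead_zone_mm] = 0.0    # ignore small hand jitters
    vx, vz, vy = k_lin * offset                    # sensor x/y/z mapped to UAV frame (assumed)
    yaw_rate = k_yaw * palm_roll_rad               # rolling the palm steers the heading
    return vx, vy, vz, yaw_rate

print(hand_to_velocity([30.0, 250.0, -10.0], 0.2))
```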
To build robots that engage in intuitive communication with people through natural language, we are developing a new knowledge representation called the conceptual network model. The conceptual network connects natural language concepts with visual perception, including color, shape, size, and spatial perception. In the implementation of spatial perception, we present a computational model based on spatial template theory to interpret qualitative spatial expressions. Based on the conceptual network model, our mobile robot can understand a user's instructions, recognize the object referred to by the user, and perform the appropriate action. Experimental results show that our approach is promising.
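A sketch of a spatial-template-style scorer for a qualitative expression such as "left of the box", where acceptability decays with angular and radial deviation from the ideal direction; the Gaussian widths are assumptions:

```python
import numpy as np

def spatial_template_score(target_xy, landmark_xy, direction="left",
                           sigma_angle=0.6, sigma_dist=1.5):
    """Acceptability of 'target is <direction> of landmark' in the landmark's frame."""
    ref = {"left": np.pi, "right": 0.0, "front": np.pi / 2, "behind": -np.pi / 2}[direction]
    v = np.asarray(target_xy, float) - np.asarray(landmark_xy, float)
    angle = np.arctan2(v[1], v[0])
    d_angle = np.arctan2(np.sin(angle - ref), np.cos(angle - ref))   # wrapped angle error
    return (np.exp(-0.5 * (d_angle / sigma_angle) ** 2)
            * np.exp(-0.5 * (np.linalg.norm(v) / sigma_dist) ** 2))

# The candidate object with the highest score is taken as the referent.
print(spatial_template_score([-1.0, 0.1], [0.0, 0.0], "left"))
```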
The humanoid robot head plays an important role in the emotional expression of human-robot interaction (HRI). Humanoid robot heads are emerging in industrial manufacturing, business reception, entertainment, teaching assistance, and tour guiding. In recent years, significant progress has been made in the field of humanoid robots; nevertheless, there is still a lack of humanoid robots that can interact with humans naturally and comfortably. This review comprises a comprehensive survey of state-of-the-art technologies for humanoid robot heads over the last three decades, covering mechanical structures, actuators and sensors, anthropomorphic behavior control, emotional expression, and human-robot interaction. Finally, current challenges and possible future directions are discussed.
Human-robot object handover is one of the most primitive and crucial capabilities in human-robot collaboration. It is of great significance for enabling robots to truly enter human production and life scenarios and serve humans in numerous tasks. Researchers have made remarkable progress in the field of human-robot object handover. This article reviews the recent literature on human-robot object handover. To this end, we summarize the results along multiple dimensions: the role played by the robot (receiver or giver), the end-effector of the robot (parallel-jaw gripper or multi-finger hand), and the robot's abilities (grasp strategy and motion planning). We also implement a human-robot object handover system for an anthropomorphic hand to verify the human-robot object handover pipeline. This review aims to provide researchers and developers with a guideline for designing human-robot object handover methods.
Human-robot interface (HRI) electronics are critical for realizing robotic intelligence. Here, we report graphene-based dual-function acoustic transducers for machine-learning-assisted human-robot interfaces (GHRI). The GHRI functions both as an artificial ear, through the triboelectric acoustic sensing mechanism, and as an artificial mouth, through the thermoacoustic sound emission mechanism. The success of the integrated device is also attributed to the multifunctional laser-induced graphene, used as triboelectric material, electrode, or thermoacoustic source. By systematically optimizing the structural parameters, the GHRI achieves high sensitivity (4500 mV Pa^(-1)) and operating durability (1,000,000 cycles and 60 days), and is capable of recognizing speech identities, emotions, content, and other information in human speech. With the assistance of machine learning, 30 speech categories are trained with a convolutional neural network, and the accuracy reaches 99.66% on the training dataset and 96.63% on the test dataset. Furthermore, the GHRI is used for artificial intelligence communication based on recognized speech features. Our work shows broad prospects for the development of robotic intelligence.
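A minimal PyTorch sketch of a CNN classifier over spectrogram-like inputs for 30 speech categories; the layer sizes and the 1 x 64 x 64 input shape are assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class SpeechCNN(nn.Module):
    def __init__(self, n_classes=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):              # x: (batch, 1, 64, 64) log-mel patches
        return self.classifier(self.features(x))

model = SpeechCNN()
logits = model(torch.randn(4, 1, 64, 64))   # -> (4, 30)
```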
Owing to the constraints of unstructured environments, it is difficult to ensure safe, accurate, and smooth completion of tasks using autonomous robots. Moreover, for small-batch and customized tasks, autonomous operation requires path planning for each task, which reduces efficiency. We propose a human-robot shared control system based on a 3D point cloud and teleoperation, in which a robot assists human operators in performing dangerous and cumbersome tasks. The system leverages the operator's skills and experience to deal with emergencies and perform online error correction. In this framework, a depth camera acquires the 3D point cloud of the target object to automatically adjust the end-effector orientation. The operator controls the manipulator trajectory through a teleoperation device. The force exerted by the manipulator on the object is automatically regulated by the robot, reducing the workload of the operator and improving the efficiency of task execution. In addition, hybrid force/motion control is used to decouple teleoperation from force control, ensuring that force and position regulation do not interfere with each other. The proposed framework was validated using an ELITE robot performing a force-controlled scanning task.
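A sketch of the hybrid force/motion decoupling idea: a diagonal selection matrix assigns each task axis either to the teleoperated motion command or to a force regulator, so the two loops cannot fight each other. The gains and the force-controlled axis are assumptions:

```python
import numpy as np

def hybrid_command(v_teleop, f_meas, f_des, S=np.diag([1, 1, 0, 1, 1, 1]), kp_f=0.002):
    """S[i,i]=1: axis i follows the operator's velocity; S[i,i]=0: axis i regulates force.
    v_teleop, f_meas, f_des are 6-vectors (twist / wrench); returns a 6-DOF velocity command.
    Here the tool z axis (index 2) is force-controlled, matching a surface-scanning task."""
    v_force = kp_f * (np.asarray(f_des, float) - np.asarray(f_meas, float))
    I = np.eye(6)
    return S @ np.asarray(v_teleop, float) + (I - S) @ v_force
```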
Intuitive and efficient interfaces for human-robot interaction (HRI) have been a challenging issue in robotics, as they are essential for the prevalence of robots supporting humans in key areas of activity. This paper presents a novel augmented reality (AR) based interface to facilitate human-virtual robot interaction. A number of human-virtual robot interaction methods have been formulated and implemented for the various types of operations needed in different robotic applications. A Euclidean distance-based method is developed to assist users in interacting with the virtual robot and the spatial entities in the AR environment. A monitor-based visualization mode is adopted, as it enables users to perceive the virtual content associated with the different interaction methods, and the virtual content augmented in the real environment is informative and useful to users during their interaction with the virtual robot. Case studies demonstrate the successful implementation of the AR-based HRI interface in planning robot pick-and-place and path-following operations.
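A minimal sketch of a Euclidean distance-based assist of the kind described: the interface snaps the user's pointer (or the virtual end-effector) to the nearest known spatial entity within a threshold. The threshold and data layout are assumptions:

```python
import numpy as np

def nearest_entity(pointer_xyz, entities, snap_radius=0.05):
    """entities: dict name -> (x, y, z) in metres; returns (name, point) if within
    snap_radius of the pointer, else None."""
    pointer = np.asarray(pointer_xyz, float)
    name, point = min(entities.items(),
                      key=lambda kv: np.linalg.norm(np.asarray(kv[1]) - pointer))
    if np.linalg.norm(np.asarray(point) - pointer) <= snap_radius:
        return name, point
    return None

print(nearest_entity([0.31, 0.0, 0.2],
                     {"pick_pose": (0.3, 0.0, 0.2), "place_pose": (0.6, 0.1, 0.2)}))
```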