Human–robot collaboration
Human-robot collaboration is the study of collaborative processes in which human and robot agents work together to achieve shared goals. Many new applications for robots require them to work alongside people as capable members of human-robot teams; these include robots for homes, hospitals, and offices, space exploration, and manufacturing. Human-robot collaboration (HRC) is an interdisciplinary research area comprising classical robotics, human-computer interaction, artificial intelligence, process design, layout planning, ergonomics, cognitive sciences, and psychology.[1][2]
Industrial applications of human-robot collaboration involve Collaborative Robots, or cobots, that physically interact with humans in a shared workspace to complete tasks such as collaborative manipulation or object handovers.[3]
Collaborative Activity
Collaboration is defined as a special type of coordinated activity, one in which two or more agents work jointly with each other, together performing a task or carrying out the activities needed to satisfy a shared goal.[5] The process typically involves shared plans, shared norms and mutually beneficial interactions.[6] Although collaboration and cooperation are often used interchangeably, collaboration differs from cooperation in that it involves a shared goal and joint action in which the success of each party depends on the other.[7]
For effective human-robot collaboration, it is imperative that the robot be capable of understanding and interpreting several communication mechanisms similar to those involved in human-human interaction.[8] The robot must also communicate its own set of intents and goals to establish and maintain a set of shared beliefs and to coordinate its actions to execute the shared plan.[5][9] In addition, all team members must demonstrate commitment to doing their own part, to the others doing theirs, and to the success of the overall task.[9][10]
Theories Informing Human-Robot Collaboration
Human-human collaborative activities are studied in depth in order to identify the characteristics that enable humans to successfully work together.[11] These activity models usually aim to understand how people work together in teams, how they form intentions and achieve a joint goal. Theories on collaboration inform human-robot collaboration research to develop efficient and fluent collaborative agents.[12]
Belief Desire Intention Model
The belief-desire-intention (BDI) model is a model of human practical reasoning originally developed by Michael Bratman.[13] The approach is used in intelligent-agent research to describe and model intelligent agents.[14] The BDI model characterizes an agent in terms of its beliefs (knowledge of the world and its current state), desires (the objectives to accomplish, or desired end states) and intentions (the courses of action currently under execution to achieve those desires), which together drive the agent's decision-making.[15] BDI agents are able to deliberate about plans, select plans and execute plans.
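The deliberation cycle described above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not any standard BDI framework; the class, the goal names, and the plan-library representation are all hypothetical:

```python
# Illustrative BDI deliberation sketch (hypothetical names, not a
# standard BDI framework): beliefs, desires, and intentions drive
# which plans the agent commits to and executes.

class BDIAgent:
    def __init__(self, beliefs, desires, plan_library):
        self.beliefs = dict(beliefs)      # knowledge of the world state
        self.desires = list(desires)      # desired end states
        self.plan_library = plan_library  # maps goal -> list of actions
        self.intentions = []              # plans currently being executed

    def deliberate(self):
        """Commit to a plan for each unmet desire that has one (forming intentions)."""
        for goal in self.desires:
            if goal in self.plan_library and not self.beliefs.get(goal, False):
                self.intentions.append((goal, list(self.plan_library[goal])))

    def step(self):
        """Execute one action of the current intention and update beliefs."""
        if not self.intentions:
            return None
        goal, actions = self.intentions[0]
        action = actions.pop(0)
        if not actions:                   # plan finished: believe goal achieved
            self.beliefs[goal] = True
            self.intentions.pop(0)
        return action

agent = BDIAgent(
    beliefs={"table_moved": False},
    desires=["table_moved"],
    plan_library={"table_moved": ["grasp_table", "lift", "carry_out_door"]},
)
agent.deliberate()
actions = [agent.step() for _ in range(3)]
# actions == ["grasp_table", "lift", "carry_out_door"]
```

After the plan completes, the agent's beliefs record the goal as achieved and the intention is dropped, mirroring the deliberate-select-execute cycle described above.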
Shared Cooperative Activity
Shared Cooperative Activity defines certain prerequisites for an activity to be considered shared and cooperative: mutual responsiveness, commitment to the joint activity and commitment to mutual support.[9][16] Consider, for example, a collaborative activity in which two agents move a table out of a door: mutual responsiveness ensures that the agents' movements are synchronized; commitment to the joint activity reassures each team member that the other will not drop their side at some point; and commitment to mutual support deals with possible breakdowns due to one team member's inability to perform part of the plan.[9]
Joint Intention Theory
Joint Intention Theory proposes that for joint action to emerge, team members must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan.[17] In collaborative work, agents should be able to count on the commitment of other members, therefore each agent should inform the others when they reach the conclusion that a goal is achievable, impossible, or irrelevant.[9]
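The communication commitment above — that an agent who privately concludes the shared goal is achieved, impossible, or irrelevant must inform its teammates — can be sketched as a simple broadcast of belief updates. This is a hypothetical message-passing sketch, not a specific agent communication language:

```python
# Sketch of the communication commitment in joint intention theory:
# an agent that privately concludes the shared goal's status has
# changed broadcasts that conclusion so shared beliefs stay consistent.
# (Hypothetical message format and class names.)

class TeamAgent:
    def __init__(self, name, team):
        self.name = name
        self.team = team            # shared list of all team members
        self.inbox = []             # (sender, status) messages received
        self.goal_status = "active"

    def conclude(self, status):
        """Reach a private conclusion ('achieved', 'impossible', or
        'irrelevant') and inform every teammate."""
        self.goal_status = status
        for mate in self.team:
            if mate is not self:
                mate.inbox.append((self.name, status))

    def process_inbox(self):
        """Adopt any goal-status conclusion reported by a teammate."""
        for sender, status in self.inbox:
            self.goal_status = status
        self.inbox.clear()

team = []
robot, human = TeamAgent("robot", team), TeamAgent("human", team)
team.extend([robot, human])

robot.conclude("achieved")   # robot concludes the shared goal is done
human.process_inbox()        # human's belief is brought back in line
# human.goal_status == "achieved"
```

The point of the sketch is that neither agent silently drops the joint activity: a change in the goal's status only becomes shared belief once it is communicated.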
Approaches to Human-Robot Collaboration
The approaches to human-robot collaboration include human emulation (HE) and human complementary (HC) approaches. Although these approaches have differences, there are research efforts to develop a unified approach stemming from potential convergences such as Collaborative Control.[18][19]
Human Emulation
The human emulation approach aims to enable computers to act like humans or have human-like abilities in order to collaborate with humans. It focuses on developing formal models of human-human collaboration and applying these models to human-computer collaboration. In this approach, humans are viewed as rational agents who form and execute plans for achieving their goals and infer other people's plans. Agents are required to infer the goals and plans of other agents, and collaborative behavior consists of helping other agents to achieve their goals.[18]
Human Complementary
The human complementary approach seeks to improve human-computer interaction by making the computer a more intelligent partner that complements and collaborates with humans. The premise is that the computer and humans have fundamentally asymmetric abilities. Therefore, researchers invent interaction paradigms that divide responsibility between human users and computer systems by assigning distinct roles that exploit the strengths and overcome the weaknesses of both partners.[18]
Key Aspects
Specialization of Roles: Based on the level of autonomy and intervention, there are several human-robot relationships including master–slave, supervisor–subordinate, partner–partner, teacher–learner and fully autonomous robot. In addition to these roles, homotopy (a weighting function that allows a continuous change between leader and follower behaviors) was introduced as a flexible role distribution.[20]
Establishing shared goal(s): Through direct discussion about goals or inference from statements and actions, agents must determine the shared goals they are trying to achieve.[18]
Allocation of Responsibility and Coordination: Agents must decide how to achieve their goals, determine what actions will be done by each agent, and how to coordinate the actions of individual agents and integrate their results.[18]
Shared context: Agents must be able to track progress toward their goals. They must keep track of what has been achieved and what remains to be done. They must evaluate the effects of actions and determine whether an acceptable solution has been achieved.[18]
Communication: Any collaboration requires communication to define goals, negotiate over how to proceed and who will do what, and evaluate progress and results.[18]
Adaptation and learning: Collaboration over time requires partners to adapt to each other and to learn from one another, both directly and indirectly.[4][18]
Time and space: The time-space taxonomy divides human-robot interaction into four categories based on whether the humans and robots interact at the same time (synchronous) or at different times (asynchronous), and whether they are in the same place (collocated) or in different places (non-collocated).[21][22]
Ergonomics: Human factors and ergonomics are key to sustainable human-robot collaboration. The robot control system can use biomechanical models and sensors to optimize ergonomic metrics such as muscle fatigue.[4][23]
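The homotopy-based role distribution mentioned under Specialization of Roles can be illustrated as a convex blend of the commands the robot would issue as leader and as follower. This is only a schematic sketch of the idea of a continuous leader-follower weighting; the function name and command values are illustrative and not the control law of the cited work:

```python
# Schematic homotopy between leader and follower behaviors: the robot's
# command is a convex combination of its leader command and its follower
# command, weighted by alpha in [0, 1]. Illustrative sketch only.

def blended_command(alpha, leader_cmd, follower_cmd):
    """alpha = 1.0 -> pure leader; alpha = 0.0 -> pure follower."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * leader_cmd + (1.0 - alpha) * follower_cmd

# Example velocity commands along one axis (hypothetical values):
print(blended_command(1.0, 0.5, -0.2))  # pure leader: 0.5
print(blended_command(0.0, 0.5, -0.2))  # pure follower: -0.2
print(blended_command(0.5, 0.5, -0.2))  # mid blend: 0.15
```

Varying alpha continuously, for instance in response to the force a human partner applies, lets the robot slide smoothly between leading and following rather than switching roles abruptly.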
See also
- Industrial Robot
- Collaborative Robot
- Human-Robot Interaction
- Computer Supported Collaboration
- Collective Intentionality
References
1. Template:Cite journal
2. Template:Cite journal
3. Template:Cite journal
4. Template:Cite journal
5. Template:Cite journal
6. Template:Cite journal
7. Hord, S. M. (1981). Working Together: Cooperation or Collaboration? Communications Services, Research and Development Center for Teacher Education, Education Annex 3.203, University of Texas, Austin, TX 78712-1288.
8. Template:Cite book
9. Template:Cite book
10. Template:Cite book
11. Template:Cite journal
12. Template:Cite arXiv
13. Bratman, Michael (1987). Intention, Plans, and Practical Reason. Center for the Study of Language and Information.
14. Template:Cite book
15. Mascardi, V., Demergasso, D., & Ancona, D. (2005). Languages for Programming BDI-style Agents: an Overview. WOA.
16. Template:Cite journal
17. Template:Cite journal
18. Template:Cite journal
19. Template:Cite book
20. Template:Cite journal
21. Template:Cite journal
22. Template:Cite book
23. Template:Cite journal
External links
- https://www.euronews.com/2018/01/29/the-future-of-work-human-robot-collaboration
- https://www.kuka.com/en-us/technologies/human-robot-collaboration