Title

Learning Furniture Assembly with Reinforcement Learning

Abstract

Robots are stepping out of their cages and joining humans as teammates for collaborative tasks in factories, warehouses, and even stores. The rise of such collaborative robots (cobots) signifies a paradigm shift in human-robot interaction, enabling robots to operate alongside humans in shared workspaces across diverse domains. While cobots excel at repetitive tasks under human supervision, their capabilities in dynamic environments or with novice human workers remain limited. Many collaborative tasks require cobots to understand a task, generate a plan, help a human worker follow the plan, detect errors during plan execution, and guide the human worker towards correcting them.

In this thesis, we study collaborative furniture assembly and employ deep reinforcement learning-based solutions for assembly planning. We develop a novel reward mechanism that enables cobots to learn assembly plans with minimal supervision, requiring only the final product configuration as input. This facilitates easy adaptation to novel furniture designs and reduces dependence on detailed instructions. Furthermore, we introduce a causal model of the assembly process to enable online error detection and correction. This empowers cobots to identify assembly inconsistencies and guide human workers in rectifying them, promoting robust and efficient collaborative furniture assembly.

Supervisor(s)

OZGUR ASLAN

Date and Location

2024-07-05 10:30:00

Category

MSc Thesis