MeSSeR: Mensch-unterstützte Synthese von Simulationsdaten für die Robotik (Human-Assisted Synthesis of Simulation Data for Robotics)

Goal

The methods of artificial intelligence (AI), and in particular machine learning (ML), are increasingly used in robotics. Many approaches rely on transferring behaviour learned in simulation to reality (Sim2Real transfer).

A particularly popular approach is domain randomisation: simulating many variants of an environment that differ in uncertain variables, such as the exact friction coefficients or background textures. The variance across these variants is intended to make the learned actions robust against the performance loss that typically occurs when they are transferred to the real world.
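
To make the idea concrete, the following minimal Python sketch shows how one randomised environment parameterisation could be drawn per training episode. The parameters and ranges (friction, mass_scale, light_intensity) are illustrative assumptions, not values from the project.

    import random
    from dataclasses import dataclass

    # Illustrative randomisation ranges; real ranges are application-specific
    # and, in MeSSeR's approach, refined with expert knowledge.
    @dataclass
    class RandomisationRanges:
        friction: tuple[float, float] = (0.4, 1.0)         # Coulomb friction coefficient
        mass_scale: tuple[float, float] = (0.8, 1.2)       # relative object mass
        light_intensity: tuple[float, float] = (0.5, 1.5)  # scene lighting

    def sample_environment_params(ranges: RandomisationRanges) -> dict[str, float]:
        """Draw one randomised parameterisation for a single training episode."""
        return {
            "friction": random.uniform(*ranges.friction),
            "mass_scale": random.uniform(*ranges.mass_scale),
            "light_intensity": random.uniform(*ranges.light_intensity),
        }

    # Each episode runs in a freshly randomised environment, so the learned
    # policy cannot overfit to any single set of simulation parameters.
    for episode in range(3):
        print(episode, sample_environment_params(RandomisationRanges()))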

This approach has two weaknesses. First, an application-specific simulation environment must be modelled, currently mostly from CAD models. Second, the variability of the different influences must be modelled in an application-specific way. Both steps cost considerable time. In practice, the second step is usually automated crudely, by varying too many factors at once and by too much.

The MeSSeR project pursues a collaboration between human and machine that focuses the variance on the specific application and context, brings expert knowledge into the simulation, and significantly shortens the time needed to provide an application-specific simulation, thereby leading to a more reliable result more quickly.

MeSSeR investigates how human-assisted synthesis of simulation environments contributes to successfully transferring learned actions to the real world, and how this approach can be realised through augmented and virtual reality (AR/VR), that is, through direct human interaction with the simulation environment. AR/VR makes this interaction intuitive and helps humans spatially perceive the simulated environment and its variability model.

More specifically, MeSSeR will design, implement, and evaluate a tool chain that enables (1) the creation of a simulation environment from a scan of the surroundings made by the robot, (2) the semantic annotation or targeted visual markup of the simulation by a human in AR/VR, (3) the visual verification of the entered variability, and (4) the generation of data from the simulation, randomly parameterised within this variability, for machine learning of robot actions in the environment.
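
As an illustration of how the stages of such a tool chain might fit together, the following Python sketch models the annotation stage (2) and the data-generation stage (4). All names (Annotation, SimulationScene, annotate, sample_variant) are hypothetical and do not describe MeSSeR's actual software.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Annotation:
        """A human-entered variability annotation for one scanned object."""
        object_id: str
        parameter: str                    # e.g. "friction" or "light_intensity"
        value_range: tuple[float, float]

    @dataclass
    class SimulationScene:
        """A simulation environment reconstructed from the robot's scan."""
        mesh_files: list[str]
        annotations: list[Annotation] = field(default_factory=list)

    def annotate(scene: SimulationScene, annotation: Annotation) -> None:
        """Stage 2: record expert knowledge entered via the AR/VR interface."""
        scene.annotations.append(annotation)

    def sample_variant(scene: SimulationScene) -> dict[str, float]:
        """Stage 4: draw one randomly parameterised scene variant for training."""
        return {
            f"{a.object_id}.{a.parameter}": random.uniform(*a.value_range)
            for a in scene.annotations
        }

    scene = SimulationScene(mesh_files=["table.obj", "cup.obj"])
    annotate(scene, Annotation("cup", "friction", (0.3, 0.9)))
    print(sample_variant(scene))  # e.g. {'cup.friction': 0.57}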

Publications
Keep the Human in the Loop: Arguments for Human Assistance in the Synthesis of Simulation Data for Robot Training

Liebers, C., Megarajan, P., Auda, J., Stratmann, T. C., Pfingsthorn, M., Gruenefeld, U., and Schneegass, S. Multimodal Technologies and Interaction, 2024.

Look Over Here! Comparing Interaction Methods for User-Assisted Remote Scene Reconstruction

Liebers, C., Pfützenreuter, N., Prochazka, M., Megarajan, P., Furuno, E., Löber, J., Stratmann, T. C., Auda, J., Degraen, D., Gruenefeld, U., and Schneegass, S. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 2024.

Partners
Universität Duisburg-Essen
www.uni-due.de

Duration

Start: 01.08.2021
End: 31.07.2023

Source of funding