Texture-less Object Pose Estimation for Assembly Process Monitoring using only Synthetic Data
Stefan Thalhammer, Automation and Control Institute, TU Wien
(Supervisor: Prof. Markus Vincze)
Object pose estimation is an important problem in robotics because it supports scene understanding and enables subsequent grasping and manipulation. Recently, deep learning approaches for object detection and pose estimation have advanced to the point of being almost on par with classical feature- or template-based approaches. A remaining downside, however, is the huge amount of training data required to create models with sufficient capacity to produce strong results. Many methods, including modern deep learning approaches, exploit known object models, often in the form of 3D models and meshes, to create this training data. Using only these models at training time is desirable because of the reduced data creation effort, especially in fast-paced, time-driven manufacturing environments. While classical approaches trained only on synthetic data transfer well to real-world data, this does not currently hold for learning-based approaches. Deep networks still require real-world training data to be on par with classical methods for object detection and pose estimation of texture-less objects in cluttered environments. This thesis investigates and proposes improvements to synthetic data creation and to the transfer from synthetic to real-world data for simultaneous object detection and pose estimation. Additionally, this thesis aims to push the state of the art in object detection of texture-less objects using such synthetically created data.