Machine Learning for Robotics
TUM Department of Informatics
Technical University of Munich


Informatik IX

Professorship for Machine Learning for Robotics

Smart Robotics Lab

Boltzmannstrasse 3
85748 Garching info@srl.in.tum.de

Follow us on:
SRL  CVG  DVL 


Student Projects

Please find below a list of projects (BSc/MSc theses) currently on offer:

Learning to walk on uneven terrain: elevation maps for bipedal walking

Supervisor(s) and Contact

Context

The Chair of Applied Mechanics (Prof. Rixen) at TUM operates a humanoid robot and a bipedal walker (see figure). Ideally, these robots can perceive the potentially uneven terrain in front of them in order to walk over it safely. In this project, we would like to explore incorporating locally perceived elevation maps as (additional) inputs to learned gait control policies.
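The interface this suggests can be sketched as follows: a local elevation patch around the robot is flattened and concatenated with the proprioceptive state before being fed to the control policy. All dimensions and the tiny two-layer network below are illustrative assumptions, not the chair's actual controller; the weights are random stand-ins for a trained policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: an 8x8 local elevation patch around the feet,
# plus a 12-D proprioceptive state (joint angles, base velocity, ...).
PATCH, PROPRIO, HIDDEN, ACTIONS = 8, 12, 32, 6

# Randomly initialised weights stand in for a trained policy.
W1 = rng.standard_normal((PATCH * PATCH + PROPRIO, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, ACTIONS)) * 0.1
b2 = np.zeros(ACTIONS)

def policy(elevation_patch, proprio_state):
    """Map an elevation patch + proprioception to bounded joint actions."""
    x = np.concatenate([elevation_patch.ravel(), proprio_state])
    h = np.tanh(x @ W1 + b1)      # hidden layer
    return np.tanh(h @ W2 + b2)  # actions bounded to [-1, 1]

actions = policy(rng.standard_normal((PATCH, PATCH)),
                 rng.standard_normal(PROPRIO))
print(actions.shape)  # (6,)
```

The point is only the input plumbing: terrain geometry enters the policy alongside the usual proprioceptive observations.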

Dynamic Neural Object Reconstruction in Learned Dense SLAM

Supervisor(s) and Contact

Context

Learning-based SLAM has made significant progress in recent years, driven by the power of deep neural networks. However, most existing methods focus on static scenes. The poses and shapes of dynamic objects are also critical for understanding the scene and for downstream automation tasks. This project focuses on pose and shape estimation of dynamic objects within a learned dense SLAM system. Alongside the recurrent iterative updates of camera pose and pixel-wise depth, we aim to also optimize the pose and shape of each object using implicit neural representations.
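The iterative-update idea can be illustrated in miniature: alternately refine an object's pose so that its shape model explains the observed 3-D points. The toy below is an assumption-laden simplification, optimizing only a translation against a fixed point-cloud shape, whereas the actual project would optimize SE(3) poses and an implicit (neural) shape representation jointly with camera pose and depth.

```python
import numpy as np

rng = np.random.default_rng(1)

# Canonical object shape (a fixed point cloud standing in for a learned
# implicit shape) and its observation displaced by an unknown translation.
model_pts = rng.standard_normal((50, 3))
true_t = np.array([0.5, -0.2, 1.0])
observed = model_pts + true_t

t = np.zeros(3)                            # object pose estimate (translation)
for _ in range(10):                        # recurrent/iterative refinement
    residual = observed - (model_pts + t)  # per-point alignment error
    t += residual.mean(axis=0)             # update step (closed-form here)
print(np.round(t, 3))
```

In the full system the residual would come from rendered-vs-observed comparisons rather than known point correspondences, and the update would be produced by a learned recurrent operator.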

Real-time 3D Completion and Semantic Reconstruction

Supervisor(s) and Contact

Context

This project focuses on 3D semantic reconstruction using an RGB-D camera. The depth sensors in RGB-D cameras typically return invalid depth measurements on shiny, glossy, bright, or distant surfaces. Moreover, it is difficult to move the camera so that it covers the whole scene for a complete, high-quality reconstruction. To this end, we aim to use deep neural networks to learn prior knowledge of different scenes and to complete the missing structures incrementally within a real-time SLAM system.
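The task interface can be made concrete with a deliberately naive baseline: invalid (zero) depth pixels are filled by diffusing valid neighbours. A learned network would instead predict the missing structure from scene priors; this placeholder only shows the invalid-in, dense-out contract such a completion module would satisfy.

```python
import numpy as np

def complete_depth(depth, iters=50):
    """Fill invalid (zero) depth pixels by averaging valid 4-neighbours.

    Stand-in for a learned completion network: same interface,
    much weaker prior (local smoothness only).
    """
    d = depth.astype(float).copy()
    valid = d > 0
    for _ in range(iters):
        padded = np.pad(d, 1, mode='edge')
        mask = np.pad((d > 0).astype(float), 1, mode='edge')
        num = (padded[:-2, 1:-1] + padded[2:, 1:-1]
               + padded[1:-1, :-2] + padded[1:-1, 2:])
        den = (mask[:-2, 1:-1] + mask[2:, 1:-1]
               + mask[1:-1, :-2] + mask[1:-1, 2:])
        fill = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        # Keep measured pixels; fill holes where any neighbour is known.
        d = np.where(valid, d, np.where(den > 0, fill, 0.0))
    return d

depth = np.full((6, 6), 2.0)
depth[2:4, 2:4] = 0.0            # a hole left by a glossy surface
dense = complete_depth(depth)
print(dense[2, 2])  # 2.0
```

Run incrementally on keyframes inside a SLAM loop, the same interface lets completed depth feed the reconstruction as new views arrive.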

LiDAR-Inertial-Camera Volumetric Dense Mapping

Supervisor(s) and Contact

Context

3D LiDAR, IMUs, and cameras each have their own strengths and shortcomings for localization and mapping tasks. This project aims to develop a real-time LiDAR-inertial-camera mapping system that exploits the best of each sensor modality for robust mapping in challenging scenarios, such as highly dynamic ego-motion, poor illumination, and adverse weather conditions. Building on the robust and efficient filter-based LIC-Fusion odometry, we aim to develop a volumetric mapping back-end for high-quality reconstruction.

Localize Monocular Camera in Large-scale LiDAR Map

Supervisor(s) and Contact

Context

LiDAR and cameras are widely applied in robotic applications, e.g. mixed reality, dense mapping, and autonomous driving. While localizing a camera with respect to a visual feature map is well studied, global monocular camera localization in LiDAR maps remains fairly unexplored. This project aims to narrow the gap between LiDAR point cloud maps and images by learning deep features in a shared embedding space, and to localize the camera accurately in a large-scale LiDAR point cloud map.
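The shared-embedding idea can be sketched as follows: two encoders map image patches and LiDAR points into a common descriptor space, and cross-modal correspondences are retrieved by cosine similarity. The random linear "encoders" and all dimensions below are stand-in assumptions; training (e.g. with a contrastive loss) would be what actually pulls true 2D-3D pairs together.

```python
import numpy as np

rng = np.random.default_rng(2)

D_IMG, D_PTS, D_EMB = 64, 32, 16             # illustrative feature sizes
W_img = rng.standard_normal((D_IMG, D_EMB))  # stand-in image encoder
W_pts = rng.standard_normal((D_PTS, D_EMB))  # stand-in point encoder

def embed(x, W):
    """Project features into the shared space and L2-normalise."""
    e = x @ W
    return e / np.linalg.norm(e, axis=-1, keepdims=True)

img_feats = embed(rng.standard_normal((5, D_IMG)), W_img)    # image descriptors
map_feats = embed(rng.standard_normal((100, D_PTS)), W_pts)  # LiDAR descriptors

# Retrieve, for each image feature, the most similar map point.
sim = img_feats @ map_feats.T    # cosine similarities in [-1, 1]
matches = sim.argmax(axis=1)
print(matches.shape)  # (5,)
```

Given such 2D-3D matches, the camera pose in the LiDAR map would follow from a standard PnP solve with outlier rejection.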

Learned plane based visual-inertial SLAM and AR applications

Supervisor(s) and Contact

Context

Monocular SLAM systems are scale-agnostic, whereas visual-inertial systems, with the aid of an IMU, can estimate metric 6DoF poses. Since structural planes are informative and essential for AR (augmented reality) applications, it is worthwhile to recover 3D planes to build the layout of the environment. This project aims to develop a monocular visual-inertial SLAM system that leverages deep neural networks to detect and predict 3D planes, and that incorporates these planes into the conventional geometric bundle adjustment.
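One way a detected plane enters bundle adjustment is as a point-to-plane residual: landmarks assigned to a plane (n, d) are penalised by their signed distance n·p + d. The floor-plane example below is hypothetical; in the real system the plane parameters would themselves be variables, jointly optimized with poses and landmarks.

```python
import numpy as np

def point_to_plane_residuals(points, normal, d):
    """Signed distances of 3-D points to the plane n.p + d = 0 (|n| = 1)."""
    return points @ normal + d

# Hypothetical example: landmarks near the floor plane z = 0.
normal = np.array([0.0, 0.0, 1.0])
points = np.array([[1.0, 2.0, 0.01],
                   [3.0, -1.0, -0.02],
                   [0.5, 0.5, 0.00]])
res = point_to_plane_residuals(points, normal, d=0.0)
print(res)  # signed distances: 0.01, -0.02, 0.0
```

Squared and summed, these residuals join the reprojection and IMU terms in the bundle-adjustment cost, regularising coplanar structure and improving the recovered layout.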

Dynamic Object-level SLAM in Neural Radiance Field

Supervisor(s) and Contact

Context

Object-level SLAM has attracted a lot of attention and made tremendous progress recently; each object in the scene can be represented by an individual sub-map. The Smart Robotics Lab has developed one of the first dynamic object-level SLAM systems that can simultaneously segment, track, and reconstruct both static and moving objects in the scene. More recently, the neural radiance field has caught the attention of the vision community and has been adopted into object-level mapping frameworks. However, in such work the object and camera poses are assumed to be given, and the lack of a tightly-coupled tracking component prevents these systems from being used in real-world applications.

Dense Monocular Implicit SLAM

Supervisor(s) and Contact

Context

Recently, the neural radiance field (NeRF) has caught the attention of the vision community, and many extensions have been proposed; among them, iMAP uses this implicit map representation within a SLAM system. However, it requires depth input to perform tracking and mapping. More recently, DROID-SLAM introduced a recurrent iterative update scheme that achieves reliable tracking and semi-dense mapping in a monocular camera setting. In this project, we would like to explore a tight integration of NeRF and DROID-SLAM to achieve a dense monocular SLAM system, ideally working even in dynamic environments.
