Machine Learning for Robotics
TUM Department of Informatics
Technical University of Munich


Informatik IX

Professorship for Machine Learning for Robotics

Smart Robotics Lab

Boltzmannstrasse 3
85748 Garching info@vision.in.tum.de

Follow us on:
SRL  CVG  DVL 


Software & Datasets

supereight (Imperial College)

We release our reference implementation of "Efficient octree-based volumetric SLAM supporting signed-distance and occupancy mapping" (see the publications below) under the BSD 3-clause license:

See https://bitbucket.org/smartroboticslab/supereight-public
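The release supports both map representations named in the title. To illustrate the difference, here is a minimal, self-contained sketch of the two standard per-voxel update rules — log-odds occupancy and a weighted running average of truncated signed-distance samples. This is not supereight's API; all names and constants below are hypothetical defaults chosen for illustration.

```python
# Illustrative voxel-update rules for the two map types supereight supports.
# NOT supereight's API; function names and constants are hypothetical.

def update_occupancy(log_odds, hit, l_hit=0.85, l_miss=-0.4,
                     l_min=-2.0, l_max=3.5):
    """Log-odds occupancy update for one voxel along a sensor ray."""
    log_odds += l_hit if hit else l_miss
    # Clamp so the map can still change when the scene changes.
    return max(l_min, min(l_max, log_odds))

def update_tsdf(tsdf, weight, sdf_sample, truncation=0.1, max_weight=100.0):
    """Weighted running average of truncated signed-distance samples."""
    d = max(-truncation, min(truncation, sdf_sample))  # truncate the sample
    new_weight = min(weight + 1.0, max_weight)          # cap the confidence
    tsdf = (tsdf * weight + d) / (weight + 1.0)
    return tsdf, new_weight
```

Octree-based maps apply such updates only to the nodes a measurement actually touches, which is what makes large volumes tractable.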



2019
Conference and Workshop Papers
Adaptive-resolution octree-based volumetric SLAM (E Vespa, N Funk, PH Kelly and S Leutenegger), In 2019 International Conference on 3D Vision (3DV), 2019.
2018
Journal Articles
Efficient octree-based volumetric SLAM supporting signed-distance and occupancy mapping (E Vespa, N Nikolov, M Grimm, L Nardi, PH Kelly and S Leutenegger), In IEEE Robotics and Automation Letters, IEEE, volume 3, 2018.

InteriorNet (Imperial College)

A dataset of 20M images created by the following pipeline: (A) We collect around 1 million CAD models provided by world-leading furniture manufacturers; these models have been used in real-world production. (B) Based on these models, around 1,100 professional designers create around 22 million interior layouts, most of which have been used in real-world decoration. (C) For each layout, we generate a number of configurations representing different random lightings and simulated scene changes over time in daily life. (D) We provide an interactive simulator (ViSim) to help create ground-truth IMU and event data, as well as monocular or stereo camera trajectories, including hand-drawn, random-walk, and neural-network-based realistic trajectories. (E) All supported image sequences and ground truth are rendered and provided.

See https://interiornet.org/



2018
Preprints
InteriorNet: Mega-scale multi-sensor photo-realistic indoor scenes dataset (W Li, S Saeedi, J McCormac, R Clark, D Tzoumanikas, Q Ye, Y Huang, R Tang and S Leutenegger), In arXiv preprint arXiv:1809.00716, 2018.

OKVIS (ETH Zurich)

We are pleased to announce the open-source release of OKVIS: Open Keyframe-based Visual-Inertial SLAM under the terms of the BSD 3-clause license. OKVIS tracks the motion of an assembly of an Inertial Measurement Unit (IMU) plus N cameras (tested: mono, stereo, and four-camera setups) and sparsely reconstructs the scene. This is the authors' implementation of the publications below. Loop-closure detection / optimisation is not yet included, but we are working on it.

Copyright © 2016, Autonomous Systems Lab / ETH Zurich Software authors and contributors: Stefan Leutenegger, Andreas Forster, Paul Furgale, Pascal Gohl, and Simon Lynen

To obtain the ROS version, follow the instructions here: http://ethz-asl.github.io/okvis_ros/ It is ready to be used with a Skybotix VI-Sensor or to process ROS bags.

We also provide a non-ROS version to use as a generic CMake library, which includes some minimal examples to process datasets: http://ethz-asl.github.io/okvis/
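As described in the IJRR 2015 paper below, OKVIS estimates keyframe states by minimising a joint nonlinear least-squares cost: weighted visual reprojection errors plus weighted IMU error terms between consecutive keyframes. The toy sketch below only illustrates the shape of that cost on made-up residuals; it is not OKVIS code, and the diagonal information matrices and helper names are assumptions for illustration.

```python
# Toy illustration of the keyframe-based cost structure minimised by
# visual-inertial estimators such as OKVIS (cf. Leutenegger et al., IJRR 2015).
# NOT OKVIS code; residuals and weights here are made up.

def weighted_squared_norm(residual, information):
    """e^T W e for a residual vector e and a diagonal information matrix W."""
    return sum(w * r * r for w, r in zip(information, residual))

def total_cost(reproj_residuals, imu_residuals, w_reproj, w_imu):
    """Sum of weighted visual and inertial error terms over one window."""
    cost = 0.0
    for e in reproj_residuals:   # one 2-vector per landmark observation
        cost += weighted_squared_norm(e, w_reproj)
    for e in imu_residuals:      # one term per pair of consecutive keyframes
        cost += weighted_squared_norm(e, w_imu)
    return 0.5 * cost
```

In the real system each term additionally involves robust loss functions and full (non-diagonal) information matrices, and the cost is minimised with an iterative nonlinear solver.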



2015
Journal Articles
Keyframe-based visual–inertial odometry using nonlinear optimization (S Leutenegger, S Lynen, M Bosse, R Siegwart and P Furgale), In The International Journal of Robotics Research, SAGE Publications, volume 34, 2015.

Software and Datasets by the Dyson Robotics Lab at Imperial College

We were involved in the development and release of the dense SLAM system ElasticFusion, the semantic SLAM system SemanticFusion, and datasets such as SceneNet RGB-D. They are all available here: http://www.imperial.ac.uk/dyson-robotics-lab/downloads/

BRISK 2 (ETH Zurich and Imperial College)

NEWS: BRISK version 2, with shorter descriptors, higher speed, and compatibility with OpenCV 3, is available here: brisk-2.0.2.zip

This is the authors' implementation of BRISK: Binary Robust Invariant Scalable Keypoints. Various (partly unpublished) extensions are provided; in particular, the default descriptor consists of 48 instead of 64 bytes.

Note that the codebase provided here is free of charge and comes without any warranty; this is bleeding-edge research software. The 3-clause BSD license (see the LICENSE file) applies. Supported operating systems: Linux or Mac OS X, tested on Ubuntu 14.04 and El Capitan. Vector instructions (SSE2 and SSSE3, or NEON) must be available. The library depends on OpenCV 2.4 or newer; OpenCV 3 is compatible, though not extensively tested, and the demo application is somewhat limited in functionality. See README.md for instructions on how to build and use the library and the demo application.
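BRISK descriptors are binary strings (48 bytes by default in this release) and are compared by Hamming distance, i.e. the number of differing bits. As a minimal sketch of how such descriptors are matched — this brute-force matcher is not part of the BRISK library, and its names and threshold are illustrative only:

```python
# Minimal brute-force matcher for binary descriptors such as BRISK's.
# NOT part of the BRISK library; names and the threshold are illustrative.

def hamming(d1: bytes, d2: bytes) -> int:
    """Number of differing bits between two equal-length binary descriptors."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def match(query, train, max_distance=80):
    """For each query descriptor, find the nearest train descriptor;
    keep the pair only if it is within max_distance bits."""
    matches = []
    for qi, q in enumerate(query):
        best = min(range(len(train)), key=lambda ti: hamming(q, train[ti]))
        if hamming(q, train[best]) <= max_distance:
            matches.append((qi, best))
    return matches
```

In practice, binary descriptors are matched with hardware popcount instructions (or, e.g., OpenCV's Hamming-norm brute-force matcher), which is what makes BRISK matching fast compared to floating-point descriptors.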



2014
PhD Thesis
Unmanned solar airplanes: Design and algorithms for efficient and robust autonomous operation (S Leutenegger), PhD thesis, ETH Zurich, 2014.
2011
Conference and Workshop Papers
BRISK: Binary robust invariant scalable keypoints (S Leutenegger, M Chli and RY Siegwart), In 2011 International Conference on Computer Vision, 2011.

Original BRISK (ETH Zurich)

The original authors' implementation of the ICCV'11 paper is also still available: brisk.zip. It requires OpenCV 2.1–2.3.
