Master Seminar "Visual Feature Learning in Autonomous Driving"
Organizers: Emec Ercelik, Burcu Karadeniz, Sina Shafaei
Contact: burcu.karadeniz(at)tum.de
Module: IN2107
Registration: Via Matching System
Type: Master Seminar
Semester: Summer Semester 2019
ECTS: 5.0/4.0
Time & Location: 9:00 - 11:00, Room 03.07.011
News (Registration)
- The presentation dates have been updated [06.05.2019]. Don't forget to refresh the topics link below to see the updated PDF with the assigned topics.
- The presentation dates will be changed. Please check the dates next week. (We will add a note here when we update them.)
- You can find the assigned topics here.
- Presentation of the introductory session
- The first session of the seminar will be on Friday, 03.05.2019, in room 03.07.011.
- Please provide a CV and a motivation letter stating your achievements and aims related to this seminar by the end of 12.02.2019 (send your documents to "burcu.karadeniz(at)tum.de" with the subject line "Seminar: Visual Feature Learning in Autonomous Driving").
- There is no scheduled preliminary session.
Content
The ultimate aim of the autonomous driving problem is to design self-driving cars that navigate roads safely and comfortably without human intervention. Since visual data contains rich information about the environment, it can be exploited for many autonomous driving tasks.
In this seminar course, students will investigate different autonomous driving tasks that involve visual data processing methods. The focus of the given topics is visual feature extraction and learning methods used at the intersection of the computer vision and autonomous driving domains.
Topics
The presentation date for each topic is given in parentheses below.
- Recurrent Neural Networks for object detection (presentation: 28.06.2019): Recurrent neural networks have recently been used to detect patterns within a single image as well as patterns across successive images. This is important because objects in a scene form a strong context in the autonomous driving domain and are strongly linked to the objects in the following scenes. In this topic, students are expected to work on recurrent neural network architectures used for object detection in the autonomous driving domain.
- Sensor Fusion methods for object detection (presentation: 28.06.2019): Fusing data from several sensors is beneficial because, in principle, useful information can still be collected from one sensor while another is occluded or unusable under certain conditions. Students working on this topic are expected to review sensor fusion methods and their results for the object detection problem in autonomous driving tasks.
- Effects of different arrangements of visual input data on object detection accuracy (presentation: 05.07.2019): The properties of a dataset play a very important role in the results of supervised learning. Researchers use several tricks such as resizing images and applying augmentation and normalization to the data; a minimal preprocessing sketch illustrating these arrangements is given after this list. In this topic, students are expected to review such methods used to improve supervised learning results for object detection, with a focus on autonomous driving.
- Learning types for object detection in autonomous driving (presentation: 05.07.2019): Different learning types are used for object detection in the literature; they require different types of datasets and follow different procedures. In this topic, students are expected to review learning types such as supervised learning, reinforcement learning, semi-supervised learning, and weakly-supervised learning for the object detection problem in the autonomous driving domain.
- Learned visual features for depth estimation (presentation: 12.07.2019): Depth estimation is a well-researched area in classical computer vision. With the increasing interest in autonomous driving, depth estimation with deep learning methods has gained attention. In this topic, students are expected to compare classical and deep learning methods for depth estimation in the autonomous driving domain and report the strong and weak aspects of each approach.
- Learned visual features for feature matching (presentation: 12.07.2019): Feature matching is one of the most important steps in pose estimation. Today, when we talk about autonomous driving and self-localization, we talk about end-to-end networks. But are we there yet? In this topic, students are expected to compare classical and deep learning methods for feature matching in the autonomous driving domain and report the strong and weak aspects of each approach (a minimal classical matching sketch is given after this list).
- Learned visual features for multiple motion estimation (presentation: 05.07.2019): Structure-from-motion algorithms assume that the environment is static. However, any autonomous driving scenario has to deal with dynamic objects such as pedestrians, vehicles, and other objects in motion. In this topic, students are expected to compare the matrix factorization method for multiple motion estimation with relevant deep learning methods and report the strong and weak aspects of each approach (a minimal factorization sketch is given after this list).
- Identity Recognition in the Cabin (presentation: 19.07.2019): Identifying the driver and the passengers is a key feature for personalizing the comfort functions of a car. There are various computer vision methods for this purpose, and the aim of this topic is to perform a thorough literature review of the methods that are applicable to in-cabin environments, taking occlusion and illumination into account.
- Multimodality in Emotion Recognition Systems (presentation: 19.07.2019): Various features related to the driver's behavior in the cabin (acceleration, steering wheel usage, etc.) have an impact on identifying the driver's current emotional state. The aim of this topic is to study all of the relevant features and modalities, as well as the fusion approaches, as the main input for a multimodal emotion recognition system.
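
For the topic on arrangements of visual input data, the following is a minimal sketch of the kind of resizing, augmentation, and normalization pipeline mentioned above, written with PyTorch/torchvision. The input resolution, the augmentation choices, and the normalization statistics are illustrative assumptions rather than values prescribed by the seminar, and for object detection the bounding box annotations would have to be transformed consistently with the images (the sketch shows only the image side).

```python
# Minimal sketch of common input arrangements for training an image-based detector.
# The resize target, augmentations, and normalization statistics are illustrative
# assumptions; for detection, box annotations must be transformed alongside the image.
import torchvision.transforms as T

train_preprocess = T.Compose([
    T.Resize((512, 512)),                        # resize to a fixed input resolution
    T.ColorJitter(brightness=0.2, contrast=0.2), # photometric augmentation
    T.RandomHorizontalFlip(p=0.5),               # geometric augmentation
    T.ToTensor(),                                # PIL image -> CHW float tensor in [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],      # ImageNet statistics, a common default
                std=[0.229, 0.224, 0.225]),
])
```

Such a pipeline would typically be applied only to the training split, while the validation pipeline keeps just the resizing and normalization steps.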
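For the feature matching topic, the sketch below shows one classical baseline, ORB descriptors with brute-force Hamming matching in OpenCV, that could be contrasted with learned matchers; the frame file names are placeholders.

```python
# Minimal sketch of classical feature matching between two consecutive frames.
# The file names are placeholders; ORB + brute-force Hamming matching is just one
# classical baseline to compare against learned feature matchers.
import cv2

img1 = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t_plus_1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints and binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative matches; best distance: {matches[0].distance:.0f}")
```

In practice, the resulting correspondences would be filtered geometrically (e.g. with RANSAC on the essential matrix) before being used for pose estimation.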
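For the multiple motion estimation topic, the sketch below shows the rank-3 factorization step that classical Tomasi-Kanade style structure from motion applies to a single rigid motion under an orthographic camera model. Multibody factorization methods for multiple motions build on this step, the recovered factors are only defined up to an affine ambiguity that a metric upgrade would still need to resolve, and the synthetic measurement matrix used here is a placeholder.

```python
# Minimal sketch of the rank-3 factorization behind Tomasi-Kanade style structure
# from motion (single rigid motion, orthographic camera). Multibody factorization
# for multiple motions builds on this decomposition.
import numpy as np

def factorize_tracks(W):
    """W: 2F x P measurement matrix of P points tracked over F frames."""
    W_centered = W - W.mean(axis=1, keepdims=True)   # subtract per-row centroid
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                    # 2F x 3 motion (camera) factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]             # 3 x P structure (shape) factor
    return M, S

# Placeholder usage with synthetic tracks: 10 frames, 50 points.
F, P = 10, 50
M, S = factorize_tracks(np.random.randn(2 * F, P))
print(M.shape, S.shape)  # (20, 3) (3, 50)
```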