Multimedia Information Processing Group

Masterseminar - Deep Learning in Computer Vision

General information

This seminar discusses important work from recent years in the field of Deep Learning. The seminar is offered in cooperation with the "Intelligent Systems" working group; its main focus is therefore on problems in computer vision and active learning. Topics include, for example, object detection, image description, inpainting, and segmentation, as well as novel learning methods.

Topics (entries that are grayed out are already assigned):

  1. End-to-End Object Detection with Transformers (link)
  2. Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation (link)
  3. On the Spectral Bias of Neural Networks (link)
  4. PixelCNN (link)
  5. Guided Variational Autoencoder for Disentanglement Learning (link)
  6. Towards continuous actions in continuous space and time using self-adaptive constructivism in neural XCSF (link)
  7. Off-Policy Deep Reinforcement Learning without Exploration (link)
  8. Weight uncertainty in neural networks (link)
  9. Automatic plankton quantification using deep features (link)
  10. Recurrent Neural Network for (Un-)supervised Learning of Monocular Video Visual Odometry and Depth (link)
  11. SuperPoint: Self-Supervised Interest Point Detection and Description (link)


Prior participation in the course "Inf-NNDL: Neural Networks and Deep Learning" is required.

The assessment in this module consists of a written elaboration, reviews of two of your fellow students' elaborations, and a presentation followed by a discussion.

The entire course is held in English: the elaboration, the reviews, and the presentation (including the discussion) must all be in English.

If you have any questions, please contact Johannes Brünger or Simon Reichhuber.


Written elaboration

Your first task is to read and understand the paper assigned to you, including any cited literature that is necessary for understanding it. Then summarize the contents of the paper in your own words; we expect about 12-20 pages. Again, do not limit yourself to your paper alone, but also draw on related literature.

The written elaboration is submitted via the EasyChair conference system.

Deadline: To be announced | Submission link: To be announced


Reviews

In a review, you should assess whether or not the author has fulfilled the task of summarizing the given paper; this of course also requires consulting the original paper. The reviews are likewise carried out via the EasyChair conference system. You will be assigned two elaborations, which you should review according to the following structure:

  • Summary: Briefly summarize the content of the elaboration. This shows that you, as the reviewer, have understood what it is about.
  • Paper strengths: List what you liked about the elaboration. If you had to defend your colleague's work, these would be your arguments for why the elaboration is good.
  • Paper weaknesses: List what you did not like about the elaboration. If you had to argue for rejecting the work, these would be your arguments for why the elaboration is not good.
  • Preliminary evaluation: A summary of your opinion, with arguments as to whether the task was fulfilled or not.
  • Overall evaluation: A grade that summarizes whether the elaboration fulfilled the task or not. This recommendation will not be passed on to the author.


The reviews are part of the assessment in this seminar and are therefore included in your grade. We will evaluate how conscientiously you have reviewed the work of your fellow students.

Deadline: To be announced


Presentation

In the presentation session, you should give a 20-minute talk on the contents of the paper assigned to you. Afterwards, we would like to have a short discussion about the paper's contributions to Deep Learning research. As the speaker, you should lead this discussion and, if necessary, start it with questions you have prepared in advance.

Dates: To be announced