Temporally Consistent Superpixels

TNT members involved in this project:
Prof. Dr.-Ing. Jörn Ostermann

Superpixel algorithms are a useful and increasingly popular preprocessing step for a wide range of computer vision applications. Grouping spatially coherent pixels that share similar low-level features greatly reduces the number of image primitives, which increases the computational efficiency of subsequent processing steps and enables more complex algorithms that would be computationally infeasible at the pixel level.
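As a rough, self-contained illustration of this reduction (not the method developed in this project), the following Python sketch assumes scikit-image's SLIC implementation and its bundled test image; the parameter values are arbitrary.

import numpy as np
from skimage import data
from skimage.segmentation import slic

img = data.astronaut()                              # 512 x 512 x 3 RGB test image
labels = slic(img, n_segments=300, compactness=10.0)

n_pixels = img.shape[0] * img.shape[1]
superpixel_ids = np.unique(labels)
print(f"{n_pixels} pixels reduced to {len(superpixel_ids)} superpixels")

# Subsequent steps can operate on one feature vector per superpixel,
# e.g. its mean color, instead of on every single pixel.
mean_colors = np.array([img[labels == k].mean(axis=0) for k in superpixel_ids])
print(mean_colors.shape)                            # (number of superpixels, 3)

In SLIC-style algorithms, the compactness parameter trades off color homogeneity against spatial regularity of the resulting segments.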

 

Figure 1: Superpixel segmentation of a still image

The use of superpixels instead of raw pixel data is especially beneficial for video applications, where a vast amount of data would otherwise have to be handled. However, superpixel algorithms designed for still images tend to produce volatile, flickering superpixel contours when applied to video sequences. Moreover, they omit by design the temporal connection between superpixels in successive images, so the same image regions are not labeled consistently across consecutive frames.

In this project, we aim to develop a superpixel algorithm for arbitrarily long video sequences that captures the temporal consistency inherent in the video volume as completely as possible and minimizes the flickering of the superpixel contours. The resulting superpixel segmentation can serve as a building block for applications such as video segmentation or tracking.

 

Figure 2: Top row: Original sequence with frame numbers.
Middle row: Subset of superpixels shown as color-coded labels.
Bottom row: Video segmentation based on the superpixel segmentation.

The new method is based on energy-minimizing clustering with a hybrid clustering strategy in a multi-dimensional feature space: color values are clustered globally over the whole video volume, while pixel positions are clustered locally at frame level. A sliding window is introduced to process arbitrarily long video sequences and to allow for a certain degree of scene change, e.g. gradual changes of illumination or color over time. A simplified sketch of this hybrid update and assignment scheme is given below.
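The following is only a minimal sketch of the hybrid idea under stated assumptions, not the published implementation: the function name hybrid_clustering_step, the compactness weight, and the brute-force assignment over all superpixels are illustrative choices, and practical implementations restrict the search to spatially nearby superpixels.

import numpy as np

def hybrid_clustering_step(frames, labels, n_sp, compactness=10.0):
    """One update/assignment pass over the frames of a sliding window.

    frames: list of (H, W, 3) float arrays (color features per frame)
    labels: list of (H, W) int arrays, current superpixel assignment
    n_sp:   number of superpixels
    """
    H, W, _ = frames[0].shape
    ys, xs = np.mgrid[0:H, 0:W]

    # Update step: color centers are GLOBAL, i.e. averaged over all frames
    # of the sliding window, which ties the per-frame segmentations together.
    color_centers = np.zeros((n_sp, 3))
    counts = np.zeros(n_sp)
    for img, lab in zip(frames, labels):
        for k in range(n_sp):
            mask = lab == k
            color_centers[k] += img[mask].sum(axis=0)
            counts[k] += mask.sum()
    color_centers /= np.maximum(counts, 1)[:, None]

    # Spatial centers are LOCAL: one (y, x) center per superpixel and frame.
    spatial_centers = []
    for lab in labels:
        centers = np.zeros((n_sp, 2))
        for k in range(n_sp):
            mask = lab == k
            if mask.any():
                centers[k] = [ys[mask].mean(), xs[mask].mean()]
        spatial_centers.append(centers)

    # Assignment step: each pixel moves to the superpixel that minimizes a
    # combined color + spatial energy (brute force over all superpixels here).
    new_labels = []
    for img, centers in zip(frames, spatial_centers):
        pix = img.reshape(-1, 3).astype(float)
        pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
        best = np.zeros(pix.shape[0], dtype=int)
        best_e = np.full(pix.shape[0], np.inf)
        for k in range(n_sp):
            e = ((pix - color_centers[k]) ** 2).sum(axis=1) \
                + compactness * ((pos - centers[k]) ** 2).sum(axis=1)
            better = e < best_e
            best[better] = k
            best_e[better] = e[better]
        new_labels.append(best.reshape(H, W))
    return new_labels

In such a sliding-window scheme, the window would then be shifted forward frame by frame, re-using the previous centers as initialization for newly entering frames; the sketch is only meant to convey the structure of the approach.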

 


ERC Starting Grants

This project has been partially funded by the European Research Council (ERC) within the Starting Grant Dynamic MinVIP.

Publications
  • Conference Contributions
    • Michael Yang, Matthias Reso, Jun Tang, Wentong Liao, Bodo Rosenhahn
      Temporally Object-based Video Co-Segmentation
Advances in Visual Computing, Springer-Verlag, Las Vegas, NV, USA, December 2015
    • Matthias Reso, Jörn Jachalsky, Bodo Rosenhahn, Jörn Ostermann
      Fast Label Propagation for Real-Time Superpixels for Video Content
      IEEE International Conference on Image Processing (ICIP), Québec City, Canada, September 2015
    • Matthias Reso, Björn Scheuermann, Jörn Jachalsky, Bodo Rosenhahn, Jörn Ostermann
      Interactive Segmentation of High-Resolution Video Content using Temporally Coherent Superpixels and Graph Cut
Advances in Visual Computing, Springer-Verlag, Las Vegas, NV, USA, December 2014
    • Matthias Reso, Jörn Jachalsky, Bodo Rosenhahn, Jörn Ostermann
      Superpixels for Video Content Using a Contour-based EM Optimization
The 12th Asian Conference on Computer Vision (ACCV), Lecture Notes in Computer Science (LNCS), Springer Berlin/Heidelberg, Singapore, November 2014
    • Matthias Reso, Jörn Jachalsky, Bodo Rosenhahn, Jörn Ostermann
      Temporally Consistent Superpixels
      IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, December 2013
    • Holger Meuel, Marco Munderloh, Matthias Reso, Jörn Ostermann
      Optical Flow Cluster Filtering for ROI Coding
      Proceedings of 30th Picture Coding Symposium (PCS), pp. 129-132, San Jose, California, USA, December 2013
    • Holger Meuel, Matthias Reso, Jörn Jachalsky, Jörn Ostermann
      Superpixel-based Segmentation of Moving Objects for Low Bitrate ROI Coding Systems
      IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS), pp. 395-400, Krakow, Poland, August 2013
  • Journals
    • Matthias Reso, Jörn Jachalsky, Bodo Rosenhahn, Jörn Ostermann
      Occlusion-Aware Method for Temporally Consistent Superpixels
      IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE, Vol. 41, No. 6, pp. 1441-1454, June 2019
    • Holger Meuel, Marco Munderloh, Matthias Reso, Jörn Ostermann
      Mesh-based Piecewise Planar Motion Compensation and Optical Flow Clustering for ROI Coding
      APSIPA Transactions on Signal and Information Processing, Vol. 4, October 2015