IEEE/ISPRS workshop on Multi-Sensor Fusion for Outdoor Dynamic Scene Understanding
28th of June 2014, Columbus, OH, USA
in conjunction with IEEE Conference on Computer Vision and Pattern Recognition
Aims and scope

The fusion of dynamic 2D/3D multi-sensor data has not been extensively explored in the vision community. The MSF 2014 workshop will encourage interdisciplinary interaction and collaboration among the computer vision, remote sensing, robotics, and photogrammetry communities, and will serve as a forum for research groups from academia and industry. There is an ever-increasing amount of multi-sensor data collection, e.g. the KITTI benchmark (stereo + laser), from platforms such as autonomous vehicles, surveillance cameras, UAVs, planes, and satellites. With its emphasis on multi-sensor data, we hope this workshop will foster new research directions in the computer vision community.

The event is organized jointly with ISPRS WG III/3 "Image Sequence Analysis".

Submissions are invited from all areas of computer vision and image analysis relevant for, or applied to, scene understanding. The workshop will focus on the fusion of multi-sensor dynamic spatial information from stereo sequences, visual and infrared sequences, video and lidar sequences, stereo and laser sequences, etc. Submissions on indoor applications are also welcome.

Topics of interest include, but are not limited to:

  • Object detection and tracking
  • Motion segmentation
  • Image sequence registration
  • Dynamic scene understanding
  • Security/surveillance
  • Vision based robot/drone navigation
  • Multi-modal fusion of sensory information
  • Multi-scale fusion
  • Low-level processing of different sensors
  • 3D scanning sensors, laser and lidar systems
  • 3D object recognition and classification
  • Large scale issues
  • Simultaneous localization and mapping

All manuscripts will be subject to a double-blind review process. The proceedings will be published by IEEE as part of the CVPR 2014 DVD proceedings and will be available in IEEE Xplore.

Program
  • 8:30-8:40 Welcome and overview by the organizers
  • 8:40-9:00 Oral 1 (15min talk + 5min discussion) "Integrating LIDAR Range Scans and Photographs with Temporal Changes"
  • 9:00-9:20 Oral 2 (15min talk + 5min discussion) "Guided Depth Upsampling via A Cosparse Analysis Model"
  • 9:20-10:15 Keynote (45min talk + 10min discussion)
  • 10:15-10:45 Coffee break
  • 10:45-11:10 Poster Spotlight (5min each)
  • 11:10-12:00 Poster Session
Accepted Paper List
  • 06 Alignment of 3D Building Models with Satellite Images Using Extended Chamfer Matching
  • 09 Active Planning, Sensing and Recognition Using a Resource-Constrained Discriminant POMDP
  • 13 Integrating LIDAR Range Scans and Photographs with Temporal Changes
  • 15 Guided Depth Upsampling via A Cosparse Analysis Model
  • 19 Frame Rate Fusion and Upsampling of EO/LIDAR Data for Multiple Platforms
  • 24 Feature Regression for Multimodal Image Analysis
  • 27 2D/3D Sensor Exploitation and Fusion for Enhanced Object Detection
Important dates
  • Submission site open: 18th of February 2014
  • Full paper submission: 26th of March 2014, 23:59 (GMT)
  • Notification of acceptance: 21st of April 2014
  • Camera-ready paper: 6th of May 2014
  • Workshop (half day): 28th of June 2014
Keynote Speaker
Affiliations