Learning Object Appearance from Occlusions using Structure and Motion Recovery
The 11th Asian Conference on Computer Vision (ACCV 2012)
Visual effect creation, as used in movie production, often requires structure and motion recovery and video segmentation. Both techniques are essential for integrating virtual objects between scene elements.
In this paper, a new method for video segmentation is presented that incorporates 3D scene information from structure and motion recovery.
By connecting and evaluating discontinued feature tracks, occlusion and reappearance information is obtained during sequential camera and scene estimation.
The foreground is characterized as image regions which temporarily occlude the rigid scene structure.
The scene structure is represented by reconstructed object points.
Their projections onto the camera images provide the cues for classifying regions as foreground or background.
The knowledge of occluded parts of a connected feature track feeds the object segmentation, which crops the foreground image regions automatically.
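The classification cue described above can be sketched as follows. This is a minimal illustration only, with an assumed NumPy data layout; the function names and the boolean visibility encoding are hypothetical, not taken from the paper:

```python
import numpy as np

def project_points(P, points_3d):
    """Project Nx3 world points with a 3x4 camera matrix P; return Nx2 pixel coordinates."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # to homogeneous coords
    proj = homog @ P.T
    return proj[:, :2] / proj[:, 2:3]  # perspective division

def occlusion_cues(P, points_3d, track_visible):
    """Split projected scene points into foreground and background cues.

    track_visible[i] is False when point i's feature track is interrupted
    in the current frame, i.e. the rigid scene point is temporarily
    occluded. Its projection then marks a foreground region; visible
    points mark background regions.
    """
    visible = np.asarray(track_visible, dtype=bool)
    pix = project_points(P, np.asarray(points_3d, dtype=float))
    fg_cues = pix[~visible]   # occluded scene points -> foreground cues
    bg_cues = pix[visible]    # visible scene points  -> background cues
    return fg_cues, bg_cues
```

In this sketch, the per-frame visibility of each connected feature track is assumed to be known from the sequential camera and scene estimation; the returned pixel positions would then seed the automatic foreground/background segmentation.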
Two applications are presented: the occlusion of integrated virtual objects and the blurred background effect.
Several demonstrations on publicly available and self-recorded data show very realistic augmented reality results.