Semantic Scene Analysis

TNT members involved in this project:
Yuren Cong, M.Sc.
Prof. Dr.-Ing. Bodo Rosenhahn
Frederik Schubert, M.Sc.

Scene understanding is a challenging topic in computer vision, robotics, and artificial intelligence. Given one or more images, we want to infer what type of scene is shown, what objects are visible, and what physical or contextual relations exist between the observed objects. This information is important in many applications, such as robot navigation, image search, or surveillance.

Relations between objects can be given by physical information, such as "in front of" or "above". Humans, however, usually also consider implicit relations between objects: both a table and the chairs around it are "above" the floor, yet a human observer would rather perceive them as a single group of objects. In other words, table and chairs define a relation that goes beyond "in front of" or "next to". This type of implicitly defined additional information is what we consider semantic or contextual information.

We estimate semantic information defined between objects in the scene and construct a so-called scene graph. Scene graphs neatly represent all the objects within a scene and allow us to analyze the content of an image, or even to compare two images semantically, i.e. with respect to their contents and the relations between their objects.
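As a rough illustration (not the data structures or label sets used in our work), a scene graph can be stored as a set of labelled objects together with a set of (subject, predicate, object) triplets. The Python sketch below builds such a graph for the table/chairs/floor example from above; all object and relation names are made up for illustration.

# Minimal sketch: a scene graph as labelled objects (nodes) and relation triplets (edges).
# Class and relation names below are illustrative, not taken from any particular dataset.
from dataclasses import dataclass, field


@dataclass
class SceneGraph:
    objects: dict = field(default_factory=dict)     # object id -> class label
    relations: list = field(default_factory=list)   # (subject id, predicate, object id)

    def add_object(self, obj_id, label):
        self.objects[obj_id] = label

    def add_relation(self, subj, predicate, obj):
        self.relations.append((subj, predicate, obj))

    def triplets(self):
        """Return human-readable (subject, predicate, object) triplets."""
        return [(self.objects[s], p, self.objects[o]) for s, p, o in self.relations]


g = SceneGraph()
g.add_object("table_1", "table")
g.add_object("chair_1", "chair")
g.add_object("floor_1", "floor")

# Purely geometric relations ...
g.add_relation("table_1", "above", "floor_1")
g.add_relation("chair_1", "above", "floor_1")
# ... and a more semantic, implicitly defined relation between chair and table.
g.add_relation("chair_1", "belongs_to", "table_1")

print(g.triplets())
# [('table', 'above', 'floor'), ('chair', 'above', 'floor'), ('chair', 'belongs_to', 'table')]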


Figure 1: Example of an observed scene (left) and the scene graph constructed from it (right).

If you are looking for a topic for your Master or Bachelor thesis and you are interested in analyzing and modelling abstract problems, please do not hesitate to contact Wentong Liao or Hanno Ackermann. You are required to have good programming skills (Matlab, Python, Java, or C++) and a good understanding of, for instance, linear algebra and statistics.

We provide a GUI implemented in Matlab for generating ground truth scene graphs and visualising the generated graphs.
It includes the manually labelled scene graph data for the NYU_V2 dataset. For more details, please refer to the README included with the tool.
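The provided GUI and data are in Matlab; purely for illustration, the Python sketch below shows how a small scene graph of this kind could be visualised as a directed graph with labelled edges. The object and relation names are invented examples, and the actual annotation format is the one described in the README.

# Illustrative only: visualise a small scene graph as a directed graph.
# Requires networkx and matplotlib; node and edge labels are invented examples,
# not the annotation format of the provided NYU_V2 data (see the README for that).
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph()
for subj, pred, obj in [("chair", "above", "floor"),
                        ("table", "above", "floor"),
                        ("chair", "next to", "table")]:
    G.add_edge(subj, obj, predicate=pred)

pos = nx.spring_layout(G, seed=0)
nx.draw_networkx(G, pos, node_color="lightblue", node_size=1500)
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "predicate"))
plt.axis("off")
plt.show()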

Publications
  • Yuren Cong, Michael Yang, Bodo Rosenhahn
    RelTR: Relation Transformer for Scene Graph Generation
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
  • Yuren Cong, Jinhui Yi, Bodo Rosenhahn, Michael Yang
    SSGVS: Semantic Scene Graph-to-Video Synthesis
    Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023
  • Yuren Cong, Wentong Liao, Hanno Ackermann, Michael Ying Yang, Bodo Rosenhahn
    Spatial-Temporal Transformer for Dynamic Scene Graph Generation
    International Conference on Computer Vision (ICCV), July 2021