Title

SEMI-SUPERVISED GENERATIVE GUIDANCE FOR ZERO-SHOT SEMANTIC SEGMENTATION

Abstract

Collecting fully annotated data to train deep networks for semantic image segmentation can be prohibitively costly due to the difficulty of producing pixel-by-pixel annotations. In this context, zero-shot learning formulations relax the labelled-data requirements by enabling the recognition of classes without training examples. Recent studies on zero-shot learning of semantic segmentation models, however, highlight the difficulty of the problem. This thesis proposes techniques for improving zero-shot generalization to unseen classes by exploiting unlabelled images. The main idea is to train a generative image model, conditioned on zero-shot segmentation predictions, in a semi-supervised manner, and to use the feedback from the generative model to its segmentation-based conditioning inputs as guidance. In this manner, the zero-shot segmentation model is encouraged to make more accurate predictions so that it provides more informative conditioning inputs to the generative model. To further improve training dynamics, the generative model is trained in the feature space produced by the early convolutional layer(s) of the segmentation architecture, overall forming a high-level-to-low-level generative feedback loop. Following the state of the art, the approach is evaluated experimentally on the COCO-Stuff dataset.
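The feedback loop sketched in the abstract can be illustrated schematically: a conditional generator maps the segmentation prediction back to early-layer features, and its reconstruction error provides an extra guidance signal for the segmentation model. The snippet below is an illustrative sketch only, with toy linear stand-ins and hypothetical shapes; it is not the thesis implementation, whose generator and segmentation network are deep models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: feature-map height/width, feature channels, classes
H, W, C, K = 8, 8, 16, 5

# Stand-ins for real network outputs:
features = rng.normal(size=(H, W, C))    # early-layer (low-level) features
seg_logits = rng.normal(size=(H, W, K))  # zero-shot segmentation logits

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy conditional "generator": a linear map from per-pixel class
# probabilities to feature channels (the real model is a conditional
# generative network trained semi-supervised on unlabelled images).
W_gen = rng.normal(size=(K, C)) * 0.1

def generator(seg_probs):
    return seg_probs @ W_gen             # (H, W, C) feature reconstruction

seg_probs = softmax(seg_logits)
reconstruction = generator(seg_probs)

# Guidance loss: how well the generator recovers the low-level features
# from the segmentation prediction. During training, gradients of this
# loss with respect to seg_logits are the feedback that encourages the
# segmentation model to produce more informative conditioning inputs.
guidance_loss = float(np.mean((reconstruction - features) ** 2))
print(f"guidance loss: {guidance_loss:.4f}")
```

In training, this term would be added to the supervised segmentation loss, closing the high-level-to-low-level loop: better segmentation predictions yield better feature reconstructions, and the reconstruction error in turn refines the predictions.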

Supervisor(s)

ABDULLAH CEM ONEM

Date and Location

2022-01-13 09:30:00

Category

MSc Thesis