[paper] CGAN


    Conditional Generative Adversarial Nets Paper: https://arxiv.org/abs/1411.1784

We introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data y we wish to condition on to both the generator and the discriminator.

    We show that this model can generate MNIST digits conditioned on class labels.

    We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.

    Introduction

By conditioning the model on additional information, it is possible to direct the data generation process.

    Multi-modal Learning For Image Labelling

    Conditional Adversarial Nets

    Generative Adversarial Nets

    Conditional Adversarial Nets

Generative adversarial nets can be extended to a conditional model if both the generator and the discriminator are conditioned on some extra information y.

The objective function of the two-player minimax game then becomes:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x \mid y)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z \mid y)))]
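    As a concrete illustration of this conditioning scheme, here is a minimal sketch in PyTorch of a generator and discriminator that each receive the condition y (one-hot encoded) concatenated with their usual inputs. The layer sizes and activations below are my own assumptions, not the architecture reported in the paper.

    ```python
    import torch
    import torch.nn as nn

    NOISE_DIM, LABEL_DIM, IMG_DIM = 100, 10, 784  # assumed sizes for flattened MNIST

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            # z and the one-hot condition y are concatenated, so the input is NOISE_DIM + LABEL_DIM
            self.net = nn.Sequential(
                nn.Linear(NOISE_DIM + LABEL_DIM, 256),
                nn.ReLU(),
                nn.Linear(256, IMG_DIM),
                nn.Sigmoid(),  # pixel values in [0, 1]
            )

        def forward(self, z, y):
            return self.net(torch.cat([z, y], dim=1))  # G(z | y)

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            # the image x and the one-hot condition y are concatenated as well
            self.net = nn.Sequential(
                nn.Linear(IMG_DIM + LABEL_DIM, 256),
                nn.LeakyReLU(0.2),
                nn.Linear(256, 1),
                nn.Sigmoid(),  # D(x | y): probability that x is real given y
            )

        def forward(self, x, y):
            return self.net(torch.cat([x, y], dim=1))
    ```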

    Experimental Results

    Unimodal

    We trained a conditional adversarial net on MNIST images conditioned on their class labels, encoded as one-hot vectors.
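    A hedged sketch of one alternating training step under this setup, reusing the Generator and Discriminator sketched above. It assumes a batch of flattened MNIST images x and integer class labels; the optimizer choice and learning rates are placeholders, and the generator update uses the common non-saturating form rather than the exact minimax loss.

    ```python
    import torch
    import torch.nn.functional as F

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    def train_step(x, labels):
        """One update on a batch of real images x (N, 784) and integer labels (N,)."""
        y = F.one_hot(labels, num_classes=LABEL_DIM).float()  # class labels as one-hot vectors
        z = torch.randn(x.size(0), NOISE_DIM)

        # Discriminator: maximize log D(x|y) + log(1 - D(G(z|y)))
        fake = G(z, y).detach()
        d_loss = -(torch.log(D(x, y) + 1e-8).mean()
                   + torch.log(1 - D(fake, y) + 1e-8).mean())
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: maximize log D(G(z|y)) (non-saturating heuristic)
        g_loss = -torch.log(D(G(z, y), y) + 1e-8).mean()
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()
    ```

    At sampling time, fixing y to a chosen class label and varying z then yields digits of that class.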

    Multimodal

    Future Work

