The Generative Adversarial Network (GAN) is widely used for instance generation in many fields. It models a zero-sum game between a generator network and a discriminator network, using the Jensen-Shannon divergence between the target distribution and the generated distribution as the optimization objective. We propose an alternative interpretation of this framework in terms of a binary-input channel. We also show that our proposed objective is equivalent to the original Jensen-Shannon divergence while enforcing a tighter performance bound. We then introduce an additional trainable parameter π with only a few modifications to the vanilla implementation of GAN. We evaluate our proposed GAN variant on MNIST and CIFAR-10, obtaining a Fréchet Inception Distance lower by ∼10 and an Inception Score higher by ∼0.5.
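The abstract does not spell out how π enters the objective; one plausible reading, consistent with the binary-input-channel view, is that π is the prior probability that the discriminator's input is drawn from the real distribution (the vanilla GAN implicitly fixes this at 1/2). Below is a minimal PyTorch sketch under that assumption; the class name, the sigmoid parameterization, and the 1e-8 stabilizer are illustrative choices, not details from the paper.

```python
import torch
import torch.nn as nn

class PiGANLoss(nn.Module):
    """Hypothetical GAN loss with a trainable prior pi in (0, 1)."""

    def __init__(self):
        super().__init__()
        # Raw logit for pi; sigmoid(0) = 0.5 recovers the vanilla GAN.
        self.pi_logit = nn.Parameter(torch.zeros(1))

    def discriminator_loss(self, d_real, d_fake):
        # d_real, d_fake: discriminator outputs in (0, 1) on real and
        # generated batches. Real terms are weighted by pi, fake terms
        # by (1 - pi), mirroring a binary-input channel with prior pi.
        pi = torch.sigmoid(self.pi_logit)
        return -(pi * torch.log(d_real + 1e-8).mean()
                 + (1 - pi) * torch.log(1 - d_fake + 1e-8).mean())

    def generator_loss(self, d_fake):
        # Non-saturating generator objective, weighted by (1 - pi).
        pi = torch.sigmoid(self.pi_logit)
        return -(1 - pi) * torch.log(d_fake + 1e-8).mean()
```

Because π is an `nn.Parameter`, it can simply be added to an existing optimizer's parameter list, which is consistent with the claim that only a few modifications to the vanilla implementation are needed.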