Papers Read on AI

June 29, 2022  

Demystifying MMD GANs

We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramer GAN critic.
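To make the quantities in the abstract concrete, here is a minimal NumPy sketch of the unbiased MMD² estimator used as the MMD GAN critic, together with the distance-induced kernel k(x, y) = (‖x‖ + ‖y‖ − ‖x − y‖)/2, for which MMD² recovers (half of) the energy distance behind the Cramér GAN critic. The function names and the Gaussian-kernel bandwidth are illustrative choices, not the paper's code:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def distance_kernel(X, Y):
    # Kernel inducing the energy distance: k(x, y) = (|x| + |y| - |x - y|) / 2.
    nx = np.linalg.norm(X, axis=1)[:, None]
    ny = np.linalg.norm(Y, axis=1)[None, :]
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return 0.5 * (nx + ny - d)

def mmd2_unbiased(X, Y, kernel):
    # Unbiased U-statistic estimator of MMD^2 between samples X and Y:
    # diagonal (i == j) terms are dropped from the within-sample averages.
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = kernel(X, X), kernel(Y, Y), kernel(X, Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.sum() / (m * n)
```

With matched samples the estimate is close to zero; with a mean shift it is clearly positive. Note this unbiasedness is for the MMD² value itself: as the paper shows, the generator's gradients remain biased once the critic is itself learned from samples.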

2018: Mikołaj Bińkowski, Danica J. Sutherland, Michael Arbel, Arthur Gretton

Gradient, Maximum Mean Discrepancy

https://arxiv.org/pdf/1801.01401v5.pdf