This time we are trying something new: instead of discussing another GAN paper, I will present some open questions I’d like to see the research community write GAN papers on.
We’ll also start with a brief overview of modern GANs as a warm-up.
1) What are the trade-offs between GANs and other generative models?
2) What sorts of distributions can GANs model?
3) How can we scale GANs beyond image synthesis?
4) What can we say about the global convergence of the training dynamics?
5) How should we evaluate GANs and when should we use them?
6) How does GAN training scale with batch size?
7) What is the relationship between GANs and adversarial examples?
Speaker: Rauf Kurbanov.
Presentation language: English.
Date and time: April 17th, 6:30-8:00 pm.
Location: Times, room 204.
Videos from previous seminars are available at http://bit.ly/MLJBSeminars