Open Questions about Generative Adversarial Networks
This time we are trying something new: instead of discussing another GAN paper, I will present some open topics on which I’d like to see the research community write GAN papers.
We will also start with a short overview of modern GANs as a warm-up.
1) What are the trade-offs between GANs and other generative models?
2) What sorts of distributions can GANs model?
3) How can we scale GANs beyond image synthesis?
4) What can we say about the global convergence of the training dynamics?
5) How should we evaluate GANs and when should we use them?
6) How does GAN training scale with batch size?
7) What is the relationship between GANs and adversarial examples?
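For background, all of the questions above concern the original GAN minimax game (Goodfellow et al., 2014), in which a generator G and a discriminator D are trained against each other:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Questions 4 and 6, in particular, ask how the simultaneous gradient updates used to approximate this game behave in theory and at scale.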
Speaker: Rauf Kurbanov.
Presentation language: English.
Date and time: April 17th, 2019, 6:30-8:00 pm.
Location: Times, room 204.
Videos from previous seminars are available at http://bit.ly/MLJBSeminars
17 April 2019: Open Questions about Generative Adversarial Networks
10 April 2019: Adaptive Sampled Softmax with Kernel Based Sampling
3 April 2019: Visual Object Tracking
13 March 2019: Knowledge Distillation
6 March 2019: BERT