Large Scale GAN Training for High Fidelity Natural Image Synthesis
At the seminar we will discuss the recent paper "Large Scale GAN Training for High Fidelity Natural Image Synthesis".
Paper abstract: "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick", allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Frechet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65."
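Two ideas from the abstract are easy to illustrate in code. The "truncation trick" samples latents from a truncated normal (resampling values whose magnitude exceeds a threshold), trading variety for fidelity; the orthogonal regularizer penalizes off-diagonal entries of W^T W so that truncation remains well behaved. The sketch below is illustrative only, assuming NumPy; the function names are not from the paper's code, and BigGAN itself applies these inside a full training pipeline.

```python
import numpy as np

def truncated_z(batch_size, dim, threshold=0.5, rng=None):
    """Truncation trick sketch: draw z ~ N(0, I) and resample any
    component with |z_i| > threshold. Smaller thresholds trade
    sample variety for fidelity."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal((batch_size, dim))
    while True:
        mask = np.abs(z) > threshold
        if not mask.any():
            return z
        z[mask] = rng.standard_normal(mask.sum())

def ortho_penalty(W, beta=1e-4):
    """Relaxed orthogonal regularization as described in the paper:
    beta * || W^T W  (elementwise*) (1 - I) ||_F^2, i.e. only the
    off-diagonal entries of W^T W are penalized."""
    WtW = W.T @ W
    off_diag = WtW - np.diag(np.diag(WtW))
    return beta * np.sum(off_diag ** 2)
```

For an exactly orthogonal weight matrix the penalty is zero, since W^T W is the identity and all off-diagonal entries vanish.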
Speaker: Aleksei Shpilman.
Presentation language: Russian.
Date and time: February 13th, 18:30-20:00.
Location: Times, room 204.
Videos from previous seminars are available at http://bit.ly/MLJBSeminars
20 February 2020: Visualization of biological data using dimensionality reduction methods
17 February 2020: A talk about the WSDM'2020 conference
13 February 2020: Junction Tree Variational Autoencoder
10 February 2020: T5: Text-to-Text Transfer Transformer
6 February 2020: An overview of 2019 work on machine learning in biology and medicine