Counterfactual Generative Networks
Neural networks are prone to learning shortcuts: they often model simple correlations, ignoring more complex ones that could generalize better.
For example, in a typical real-world dataset, most images depict cows on green pastures. The easiest correlation a classifier can exploit to predict the label "cow" is therefore the green, grass-textured background.
One central concept in causality states that a causal generative process is composed of autonomous modules that do not influence each other. Each of these modules controls a single factor of variation (FoV); in our example, the background and the appearance of the animal itself.
We want to be able to produce counterfactual images, i.e., images of unseen combinations of FoVs.
In this seminar, we introduce a method that decomposes the image generation process into independent mechanisms and is trained without direct supervision.
The Counterfactual Generative Network (CGN) disentangles object shape, object texture, and background, allowing the generation of counterfactual images.
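Once shape, texture, and background are generated by separate mechanisms, combining them into an image can be expressed as a simple analytic blending step: a shape mask selects object pixels from the texture, and its complement fills in the background. A minimal NumPy sketch (all array names and values here are illustrative, not the CGN implementation):

```python
import numpy as np

def compose(mask, texture, background):
    """Blend object texture and background using a soft shape mask in [0, 1]."""
    return mask * texture + (1.0 - mask) * background

# Toy 4x4 single-channel "images"; hypothetical stand-ins for generator outputs.
rng = np.random.default_rng(0)
mask = (rng.random((4, 4)) > 0.5).astype(float)  # shape mask (object silhouette)
cow_texture = np.full((4, 4), 0.8)               # object texture
beach = np.full((4, 4), 0.2)                     # a swapped-in, unseen background
counterfactual = compose(mask, cow_texture, beach)
```

Swapping only the background array while keeping the mask and texture fixed yields an unseen FoV combination, e.g., a cow on a beach.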
Further, we show that counterfactual images, despite being synthetic, can improve out-of-distribution robustness on the original classification task.
Last but not least, CGNs can be trained efficiently on a single GPU, exploiting common pre-trained models as inductive biases.
Speaker: Rauf Kurbanov.
Talk language: English.
Date and time: February 2, 20:00.
Meeting ID: 430 117 051