2021/06/02

My Talk at the Confer Conference: An Empirical Analysis of Transfer Learning for Generative Adversarial Networks

The generative adversarial network (GAN) framework has attracted immense attention from the machine learning community in recent years. GANs have succeeded in generating realistic-looking data from noise and have countless applications. Transfer learning, on the other hand, is a strategy for improving machine learning models when data is scarce. The main intuition behind transfer learning is to first train a model on plentiful data that is "not" our target data but shares some attributes with it. In models built from a single network, such as CNNs, transfer learning is typically implemented by sharing the first layers (from the input side) of the model between the source and target domains, initializing the remaining layers, and then retraining the model on the scarce target data. This gives the model a good starting point when learning on the target data and yields better results than training from scratch on the target.

The GAN framework, however, consists of two networks, a generator and a discriminator, so a legitimate question arises: which network, and which part of that network, contains the transferable features and hence parameters? And should we fine-tune or freeze the shared parameters? In this talk I try to answer this specific and direct question with empirical results from a series of experiments that I designed based on the conditional GAN framework. https://2021.confer.no/program/?tab=speakers
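The layer-sharing recipe described above can be sketched in a few lines of plain Python. This is a minimal toy illustration, not the talk's actual experimental code: the model is just a list of dense weight matrices, and `transfer` (a hypothetical helper name) copies the first `n_shared` layers from a pretrained source model, re-initializes the rest, and records which layers are trainable (fine-tuned) versus frozen.

```python
import random

def init_layer(n_in, n_out):
    """Randomly initialised weight matrix for one dense layer."""
    return [[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]

def transfer(source_model, layer_sizes, n_shared, freeze_shared=True):
    """Build a target model reusing the first `n_shared` layers of
    `source_model` (the layers nearest the input) and re-initialising
    the rest, to be retrained on the scarce target data.

    Returns (layers, trainable): trainable[i] is False for layers that
    are shared and frozen, True for layers that will be updated.
    """
    layers, trainable = [], []
    for i, (n_in, n_out) in enumerate(zip(layer_sizes, layer_sizes[1:])):
        if i < n_shared:
            # Reuse pretrained weights from the source domain.
            layers.append([row[:] for row in source_model[i]])
            trainable.append(not freeze_shared)  # freeze or fine-tune
        else:
            # Fresh weights, trained from scratch on the target data.
            layers.append(init_layer(n_in, n_out))
            trainable.append(True)
    return layers, trainable
```

For a GAN, the open question the talk addresses is which of the two networks (generator, discriminator) this recipe should be applied to, and with `freeze_shared` set to which value.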
