In recent years, deep learning methods have proved highly effective, driving major breakthroughs in computer vision, medicine, robotics, and other fields. Yet to reach such performance, deep learning requires large amounts of training data. Data augmentation techniques generate new samples from existing datasets in order to enlarge the training set and improve the performance of existing models. In this work, our goal is first to reproduce the work of \cite{tran2017bayesian}, which uses generative adversarial networks to create new training examples during training. We then compare its performance with the state-of-the-art augmentation techniques introduced in \cite{hauberg2016dreaming}. Finally, we explore several ways of combining these techniques.