Reversible GANs for Memory-efficient Image-to-Image Translation

Published in CVPR, 2019

Abstract

The Pix2pix and CycleGAN losses have substantially improved the visual quality of results in image-to-image translation tasks, both qualitatively and quantitatively. We extend this framework by exploring approximately invertible architectures, which are well suited to these losses. Such architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We demonstrate superior quantitative results on the Cityscapes and Maps datasets at a near-constant memory budget.
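To make the memory claim concrete, here is a minimal sketch of a RevNet-style additive coupling block, the standard building block behind invertible architectures (an assumption: the paper's exact layer may differ). Because each step can be inverted exactly, intermediate activations need not be stored during backpropagation, which is what yields near-constant memory in network depth. The weight matrices `W_f`, `W_g` and the functions `F`, `G` below are hypothetical stand-ins for arbitrary residual functions:

```python
import numpy as np

rng = np.random.default_rng(0)
W_f = rng.standard_normal((4, 4)) * 0.1  # hypothetical parameters of F
W_g = rng.standard_normal((4, 4)) * 0.1  # hypothetical parameters of G

def F(h):
    # Arbitrary (even non-invertible) residual function.
    return np.tanh(h @ W_f)

def G(h):
    return np.tanh(h @ W_g)

def forward(x1, x2):
    # Input is split into two halves; each update touches only one half.
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Invert the two steps in reverse order by subtraction.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = rng.standard_normal((2, 4)), rng.standard_normal((2, 4))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # exact reconstruction
```

Stacking such blocks gives a deep network whose activations can be recomputed on the fly from the output, so memory use does not grow with depth.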

  1. @article{vanderouderaaW2019a,
      author = {van der Ouderaa, Tycho F. A. and Worrall, Daniel E.},
      title = {Reversible GANs for Memory-efficient Image-to-Image Translation},
      journal = {CoRR},
      volume = {abs/1902.02729},
      year = {2019},
      url = {http://arxiv.org/abs/1902.02729},
      archiveprefix = {arXiv},
      eprint = {1902.02729},
      timestamp = {Tue, 21 May 2019 18:03:40 +0200},
      biburl = {https://dblp.org/rec/bib/journals/corr/abs-1902-02729},
      bibsource = {dblp computer science bibliography, https://dblp.org}
    }