Harmonic Networks: Deep Translation and Rotation Equivariance

Published in CVPR, 2017

Code: https://github.com/deworrall92/harmonicConvolutions


Translating or rotating an input image should not affect the results of many computer vision tasks. Convolutional neural networks (CNNs) are already translation equivariant: input image translations produce proportionate feature map translations. This is not the case for rotations. Global rotation equivariance is typically sought through data augmentation, but patch-wise equivariance is more difficult. We present Harmonic Networks, or H-Nets, a CNN exhibiting equivariance to patch-wise translation and 360° rotation. We achieve this by replacing regular CNN filters with circular harmonics, returning a maximal response and orientation for every receptive field patch. H-Nets use a rich, parameter-efficient representation with low computational complexity, and we show that deep feature maps within the network encode complicated rotational invariants. We demonstrate that our layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization. We also achieve state-of-the-art classification on rotated-MNIST, and competitive results on other benchmark challenges.

  @inproceedings{WorrallGTB17,
      author = {Worrall, Daniel E. and Garbin, Stephan J. and Turmukhambetov, Daniyar and Brostow, Gabriel J.},
      title = {Harmonic Networks: Deep Translation and Rotation Equivariance},
      booktitle = {2017 {IEEE} Conference on Computer Vision and Pattern Recognition,
                   {CVPR} 2017, Honolulu, HI, USA, July 21-26, 2017},
      pages = {7168--7177},
      year = {2017},
      doi = {10.1109/CVPR.2017.758}
  }