Beyond Planar Symmetry:

Modeling human perception of reflection and rotation symmetries in the wild

Figure 1. Sample training images from the Microsoft COCO dataset [39]. Symmetry ground truths (GTs, middle column) are computed from 2 or more human labels (statistics shown in Figure 2): line segments mark reflection symmetry axes and red dots mark rotation symmetry centers. Right column: predicted heatmaps, with reflection symmetry axes in green and rotation symmetry centers in red.

Humans take advantage of real-world symmetries for various tasks, yet capturing their superb symmetry perception mechanism in a computational model remains elusive. Encouraged by a recent discovery (CVPR 2016) demonstrating extremely high inter-person accuracy of human-perceived symmetries in the wild, we have created the first deep-learning neural network for reflection and rotation symmetry detection (Sym-NET), trained on photos from the MS-COCO (Common Objects in Context) dataset with nearly 11K symmetry labels from more than 400 human observers. We employ novel methods to convert discrete human labels into symmetry heatmaps, capture symmetry densely in an image, and quantitatively evaluate Sym-NET against multiple existing computer vision algorithms. On the symmetry competition test sets from CVPR 2013 and on unseen MS-COCO photos, Sym-NET comes out as the winner, with significantly superior performance over all other competitors. Beyond mathematically well-defined symmetries on a plane, Sym-NET demonstrates the ability to identify viewpoint-varied 3D symmetries, partially occluded symmetrical objects, and symmetries at a semantic level.
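To make the label-to-heatmap conversion concrete: the paper's exact procedure is not spelled out here, but a minimal sketch of one plausible approach is shown below, assuming sparse labels are rasterized onto a canvas and then Gaussian-smoothed into a dense target. The function names (rasterize_segment, labels_to_heatmap) and the smoothing bandwidth sigma are illustrative assumptions, not the authors' method.

    # Hypothetical sketch: converting sparse human symmetry labels
    # (axis line segments, rotation-center points) into dense heatmaps.
    # Assumption: rasterize labels, then Gaussian-blur and renormalize.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def rasterize_segment(canvas, p0, p1):
        """Draw a reflection-axis segment by dense point sampling."""
        n = int(np.hypot(p1[0] - p0[0], p1[1] - p0[1])) + 1
        rows = np.linspace(p0[1], p1[1], n).round().astype(int)
        cols = np.linspace(p0[0], p1[0], n).round().astype(int)
        h, w = canvas.shape
        keep = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
        canvas[rows[keep], cols[keep]] = 1.0

    def labels_to_heatmap(shape, segments=(), centers=(), sigma=5.0):
        """Accumulate all labels for one image, then smooth."""
        canvas = np.zeros(shape, dtype=np.float32)
        for p0, p1 in segments:          # reflection symmetry axes
            rasterize_segment(canvas, p0, p1)
        for x, y in centers:             # rotation symmetry centers
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                canvas[int(y), int(x)] = 1.0
        heat = gaussian_filter(canvas, sigma=sigma)
        return heat / heat.max() if heat.max() > 0 else heat

    # Example: one labeled axis and one labeled center on a 128x128 image.
    gt = labels_to_heatmap((128, 128),
                           segments=[((20, 30), (100, 90))],
                           centers=[(64, 64)])

Smoothing the rasterized labels in this way would also soften disagreement between the multiple human annotators per image, since overlapping labels reinforce each other in the accumulated canvas.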

Publications:

Beyond Planar Symmetry:

Modeling human perception of reflection and rotation symmetries in the wild

    Christopher Funk and Yanxi Liu

    arXiv, 2017.