Structural Inpainting

Huy V. Vo
École Polytechnique
Ngoc Q. K. Duong
Patrick Pérez


Scene-agnostic visual inpainting remains very challenging despite progress in patch-based methods. Recently, Pathak et al. (2016) introduced convolutional "context encoders" (CEs) for unsupervised feature learning through image completion tasks. With the additional help of adversarial training, CEs turned out to be a promising tool for completing complex structures in real inpainting problems. In the present paper, we propose to push this key ability further by relying on perceptual reconstruction losses at training time. We show the merit of the approach for structural inpainting on a wide variety of visual scenes, and confirm it through a user study. Combined with the optimization-based refinement of Yang et al. (2016) with neural patches, our context encoder opens up new opportunities for prior-free visual inpainting.
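The perceptual reconstruction loss mentioned above compares images in the feature space of a fixed network rather than in raw pixel space, which encourages the completed region to match structures instead of exact pixel values. Below is a minimal, self-contained NumPy sketch of the idea; it uses a fixed random convolution with a ReLU as a toy stand-in for pretrained network features (the paper's actual feature extractor, layers, and weighting are not reproduced here):

```python
import numpy as np

def conv2d(img, kernels):
    """Naive valid-mode convolution: img (H, W), kernels (K, kh, kw)."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return out

def perceptual_loss(x, y, kernels):
    """MSE between fixed feature maps (conv + ReLU) of two images."""
    fx = np.maximum(conv2d(x, kernels), 0.0)
    fy = np.maximum(conv2d(y, kernels), 0.0)
    return float(np.mean((fx - fy) ** 2))

# Toy usage: frozen random "feature extractor" and two 16x16 images.
rng = np.random.default_rng(0)
kernels = rng.standard_normal((4, 3, 3))
completed = rng.standard_normal((16, 16))
target = rng.standard_normal((16, 16))

print(perceptual_loss(completed, completed, kernels))  # identical images -> 0.0
print(perceptual_loss(completed, target, kernels))     # differing images -> positive
```

In practice this loss term is computed with features from a pretrained classification network (e.g. VGG-style features) and combined with pixel and adversarial losses during training; the random kernels here only illustrate the feature-space comparison itself.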


@inproceedings{Vo2018StructuralInpainting,
  author    = {Vo, Huy V. and Duong, Ngoc Q. K. and P\'{e}rez, Patrick},
  title     = {Structural Inpainting},
  year      = {2018},
  booktitle = {Proceedings of the 26th ACM International Conference on Multimedia ({ACMMM})},
  pages     = {1948--1956},
}