We present DietNeRF, a 3D neural scene representation estimated from a few images.
Neural Radiance Fields (NeRF) learn a continuous volumetric representation of a scene through multi-view consistency, and can be rendered from novel viewpoints by ray casting. While NeRF has an impressive ability to reconstruct geometry and fine details given many images (up to 100 for challenging 360° scenes), it often finds a degenerate solution to its image reconstruction objective when only a few input views are available. To improve few-shot quality, we propose DietNeRF, which adds an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. DietNeRF is trained on individual scenes to (1) correctly render given input views from the same pose, and (2) match high-level semantic attributes across different, random poses. Because it compares semantics rather than pixels, this loss lets us supervise DietNeRF from arbitrary poses. We extract these semantics with a pre-trained visual encoder such as CLIP, a Vision Transformer trained on hundreds of millions of diverse single-view 2D photographs mined from the web with natural language supervision. In experiments, DietNeRF improves the perceptual quality of few-shot view synthesis when learned from scratch, can render novel views with as few as one observed image when pre-trained on a multi-view dataset, and produces plausible completions of completely unobserved regions.
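To make the two objectives concrete, below is a minimal PyTorch-style sketch of a DietNeRF training step. It is an illustration of the idea, not the paper's exact implementation: the renderer `render_image`, the pose sampler `sample_random_pose`, and the loss weight `lam` are hypothetical placeholders, while the frozen CLIP image encoder supplying the semantic embeddings comes from OpenAI's public `clip` package.

```python
# A minimal sketch of DietNeRF's two training objectives, assuming a
# differentiable `render_image(nerf, pose)` that volume-renders a full image
# (hypothetical helper; in practice a reduced resolution keeps the
# semantic branch affordable) and a `sample_random_pose()` pose sampler.
import random

import clip  # OpenAI CLIP: https://github.com/openai/CLIP
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()  # frozen fp32 visual encoder

CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device)

def embed(images):
    """L2-normalized CLIP embeddings for a batch of RGB images in [0, 1]."""
    images = F.interpolate(images, size=224, mode="bicubic", align_corners=False)
    images = (images - CLIP_MEAN[None, :, None, None]) / CLIP_STD[None, :, None, None]
    z = clip_model.encode_image(images)
    return z / z.norm(dim=-1, keepdim=True)

def dietnerf_losses(nerf, train_images, train_poses, sample_random_pose, lam=0.1):
    # (1) Reconstruction: MSE between a render and the observation at an input pose.
    i = random.randrange(len(train_poses))
    rendered = render_image(nerf, train_poses[i])  # [3, H, W], hypothetical
    loss_mse = F.mse_loss(rendered, train_images[i])

    # (2) Semantic consistency: a render from an arbitrary pose should share
    # high-level CLIP semantics with an observed view, even though the two
    # images are pixel-wise incomparable.
    novel = render_image(nerf, sample_random_pose())
    with torch.no_grad():  # no gradient needed for the target embedding
        z_target = embed(train_images[i].unsqueeze(0))
    z_novel = embed(novel.unsqueeze(0))
    loss_sc = 1.0 - (z_novel * z_target).sum(dim=-1).mean()  # cosine distance

    # `lam` is an assumed weighting between the two terms.
    return loss_mse + lam * loss_sc
```

Because CLIP embeddings capture high-level content rather than pixel alignment, the second term provides a training signal at poses for which no ground-truth image exists, which is what supervising DietNeRF "from arbitrary poses" refers to above.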
Few-shot novel view synthesis is a challenging problem. (A) With 100 observations of an object, NeRF estimates a detailed and accurate representation purely from multi-view consistency. (B) With only 8 views, however, the same NeRF overfits, placing the object in the near field of the training cameras. (C) A simplified and carefully tuned NeRF can converge, but captures fine detail poorly. (D) Without prior knowledge about similar objects, single-scene view synthesis cannot plausibly complete unobserved regions, such as the left side of an object seen only from the right. In this work, we find that these failures occur because NeRF is supervised only from the sparse training poses.
By training with our semantic consistency loss, DietNeRF renders plausible novel views from only 8 training images per object (shown on top).
Ajay Jain, Matthew Tancik, and Pieter Abbeel. Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 5885-5894.
@InProceedings{Jain_2021_ICCV,
    author    = {Jain, Ajay and Tancik, Matthew and Abbeel, Pieter},
    title     = {Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {5885--5894}
}