Towards High-Fidelity 3D Portrait Generation with Rich Details by Cross-View Prior-Aware Diffusion

Haoran Wei 1, Wencheng Han 1, Xingping Dong 2, Jianbing Shen 1

1 University of Macao

2 Wuhan University

Abstract

Recent diffusion-based single-image 3D portrait generation methods typically employ 2D diffusion models to provide multi-view knowledge, which is then distilled into 3D representations. However, these methods often struggle to produce high-fidelity 3D models and frequently yield excessively blurred textures. We attribute this issue to insufficient consideration of cross-view consistency during the diffusion process, which causes significant disparities between views and ultimately leads to blurred 3D representations. In this paper, we address this issue by comprehensively exploiting multi-view priors in both the conditioning and the diffusion procedures to produce consistent, finely textured portraits. From the conditioning standpoint, we propose a Hybrid Priors Diffusion Model (HPDM), which explicitly and implicitly incorporates multi-view priors as conditions to enhance the consistency of the generated multi-view portraits. From the diffusion perspective, given the strong influence of the diffusion noise distribution on detailed texture generation, we propose a Multi-View Noise Resampling Strategy (MV-NRS) integrated into the optimization process, which leverages cross-view priors to enhance representation consistency. Extensive experiments demonstrate that our method produces 3D portraits with accurate geometry and fine-grained textures from a single image.
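For intuition, the sketch below shows one plausible way a denoiser could be conditioned on both explicit and implicit multi-view priors, roughly in the spirit of the hybrid conditioning described above. The class name, layer sizes, and the specific fusion scheme (channel-wise concatenation of a coarse target-view rendering plus an embedding added to the timestep embedding) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HybridConditionedDenoiser(nn.Module):
    """Illustrative denoiser taking explicit and implicit multi-view priors (assumed design).

    Explicit prior: a coarse rendering of the target view, concatenated channel-wise
    with the noisy input. Implicit prior: a global embedding (e.g., of the reference
    view or camera) fused with the timestep embedding. All sizes are arbitrary.
    """

    def __init__(self, channels: int = 4, embed_dim: int = 128):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, embed_dim), nn.SiLU(),
                                        nn.Linear(embed_dim, embed_dim))
        self.prior_embed = nn.Linear(embed_dim, embed_dim)          # implicit-prior branch
        self.in_conv = nn.Conv2d(channels * 2, 64, 3, padding=1)    # noisy latent + explicit prior
        self.mid = nn.Conv2d(64, 64, 3, padding=1)
        self.out_conv = nn.Conv2d(64, channels, 3, padding=1)
        self.embed_to_scale = nn.Linear(embed_dim, 64)

    def forward(self, noisy, t, explicit_prior, implicit_prior):
        # Explicit conditioning: concatenate the coarse target-view rendering.
        h = self.in_conv(torch.cat([noisy, explicit_prior], dim=1))
        # Implicit conditioning: fuse timestep and prior embeddings, then modulate features.
        emb = self.time_embed(t[:, None].float()) + self.prior_embed(implicit_prior)
        h = torch.relu(self.mid(h) + self.embed_to_scale(emb)[:, :, None, None])
        return self.out_conv(h)  # predicted noise for the target view


# Usage: 4-channel 64x64 latents for two views, a coarse rendering of each target view
# as the explicit prior, and a 128-d reference embedding as the implicit prior.
model = HybridConditionedDenoiser()
eps = model(torch.randn(2, 4, 64, 64), torch.tensor([10, 10]),
            torch.randn(2, 4, 64, 64), torch.randn(2, 128))
```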

Framework Overview

The Portrait Diffusion Framework. The framework comprises three integral modules. GAN-prior Portrait Initialization employs existing portrait GAN priors to derive initial tri-plane NeRF features from a frontal-view portrait image. Portrait Geometry Restoration reconstructs the geometry from these initialized tri-planes. Multi-view Diffusion Texture Refinement transforms the coarse textures into detailed representations. A minimal structural sketch of this data flow is given below.
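To make the three-stage data flow concrete, here is a minimal structural sketch expressed as a single PyTorch module. The class name, stage interfaces, and tensor shapes are hypothetical placeholders; the real stages involve GAN inversion and diffusion-guided optimization rather than the stand-in layers shown.

```python
import torch
import torch.nn as nn


class PortraitPipeline(nn.Module):
    """Hypothetical skeleton of the three-stage flow; the real stages are far richer."""

    def __init__(self, feature_dim: int = 32, plane_res: int = 256):
        super().__init__()
        self.feature_dim, self.plane_res = feature_dim, plane_res
        # Stage 1 stand-in: a real system would invert a pretrained portrait GAN
        # to obtain the initial tri-plane features from the frontal view.
        self.initializer = nn.Conv2d(3, 3 * feature_dim, 3, padding=1)

    def gan_prior_initialization(self, frontal: torch.Tensor) -> torch.Tensor:
        x = nn.functional.interpolate(frontal, size=(self.plane_res, self.plane_res))
        feats = self.initializer(x)
        b = frontal.shape[0]
        # Three axis-aligned feature planes (XY, XZ, YZ) of a tri-plane NeRF.
        return feats.view(b, 3, self.feature_dim, self.plane_res, self.plane_res)

    def restore_geometry(self, triplanes: torch.Tensor) -> torch.Tensor:
        # Placeholder: the geometry-restoration stage optimizes the tri-planes here.
        return triplanes

    def refine_texture(self, triplanes: torch.Tensor) -> torch.Tensor:
        # Placeholder: multi-view diffusion guidance refines coarse textures here.
        return triplanes

    def forward(self, frontal: torch.Tensor) -> torch.Tensor:
        triplanes = self.gan_prior_initialization(frontal)   # Stage 1
        triplanes = self.restore_geometry(triplanes)         # Stage 2
        return self.refine_texture(triplanes)                # Stage 3


# Usage: a single 512x512 frontal portrait in, tri-plane features out.
planes = PortraitPipeline()(torch.rand(1, 3, 512, 512))
print(planes.shape)  # torch.Size([1, 3, 32, 256, 256])
```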

Core Contributions

Illustrations of our proposed Hybrid Priors Diffusion Model (a) and Multi-View Noise Resampling Strategy (b). HPDM leverages various multi-view priors in a hybrid manner to condition the novel-view synthesis process, yielding more consistent results across views. MV-NRS transfers cross-view priors to control the diffusion noise distribution for representation alignment; a sketch of this idea follows.
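The snippet below sketches one simple way such cross-view noise control could look: each view's diffusion noise mixes a shared cross-view component with a view-specific component, keeping every sample unit-variance Gaussian while correlating the views. The function name, the mixing rule, and the `share_ratio` parameter are assumptions for illustration, not the paper's exact resampling strategy.

```python
import torch


def resample_multiview_noise(num_views: int, shape, share_ratio: float = 0.7,
                             generator: torch.Generator | None = None) -> torch.Tensor:
    """Illustrative multi-view noise resampling (assumed form, not the paper's exact rule).

    Each view's noise is a mixture of a shared cross-view component and a view-specific
    component, so denoising trajectories of different views stay statistically aligned.
    `share_ratio` controls how strongly views are correlated; the mixing weights are
    chosen so every returned sample remains a unit-variance Gaussian.
    """
    shared = torch.randn(shape, generator=generator)                 # cross-view component
    private = torch.randn((num_views, *shape), generator=generator)  # per-view component
    a, b = share_ratio ** 0.5, (1.0 - share_ratio) ** 0.5
    return a * shared.unsqueeze(0) + b * private                     # shape: (num_views, *shape)


# Usage inside a texture-refinement step: one noise tensor per candidate view.
noise = resample_multiview_noise(num_views=4, shape=(4, 64, 64))
print(noise.shape)  # torch.Size([4, 4, 64, 64])
```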

Comparisons