This repository contains the official code for the paper "UV-IDM: Identity-Conditioned Latent Diffusion Model for Face UV-Texture Generation", presented at CVPR 2024.
3D face reconstruction aims to generate high-fidelity 3D face shapes and textures from single-view or multi-view images. However, current prevailing facial texture generation methods generally suffer from low-quality texture, identity information loss, and inadequate handling of occlusions. To solve these problems, we introduce an Identity-Conditioned Latent Diffusion Model for face UV-texture generation (UV-IDM) to generate photo-realistic textures based on the Basel Face Model (BFM). UV-IDM leverages the powerful texture generation capacity of a latent diffusion model (LDM) to obtain detailed facial textures. To preserve the identity during the reconstruction procedure, we design an identity-conditioned module that can utilize any in-the-wild image as a robust condition for the LDM to guide texture generation. UV-IDM can be easily adapted to different BFM-based methods as a high-fidelity texture generator. Furthermore, in light of the limited accessibility of most existing UV-texture datasets, we build a large-scale and publicly available UV-texture dataset based on BFM, termed BFM-UV. Extensive experiments show that our UV-IDM can generate high-fidelity textures in 3D face reconstruction within seconds while maintaining image consistency, achieving new state-of-the-art performance in facial texture generation.
- Update the Gradio demo
- Release the datasets
- Release the training code
- Release the inference code
Before you begin, ensure you have met the following requirements:
- A GPU with CUDA support (an NVIDIA GPU is recommended).
To install this project, clone it using Git and install the dependencies:
```bash
git clone https://github.com/username/UV-IDM.git
cd UV-IDM
```
Please first download our checkpoint files from this link: google-link, and place them in the corresponding folders:
```
./
├── checkpoints/
│   └── ...
├── pretrained/
│   └── ...
├── BFM/
│   └── ...
└── third_party/
    └── ...
```
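Once the files are in place, a quick check like the following (a minimal sketch, run from the repository root; the folder names are taken from the layout above, while the files inside each folder depend on the checkpoint release) confirms nothing is missing:

```python
# Verify that the expected top-level folders exist after downloading.
import os

for d in ("checkpoints", "pretrained", "BFM", "third_party"):
    print(f"{d}/: {'ok' if os.path.isdir(d) else 'MISSING'}")
```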
We recommend using Anaconda for environment and package management.
```bash
conda env create -f environment.yaml
conda activate uvidm
git clone https://github.com/NVlabs/nvdiffrast
pip install -e nvdiffrast
```
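Before running anything heavy, a quick import check can verify the environment (a minimal sketch; it only tests that PyTorch and nvdiffrast load inside the activated `uvidm` env):

```python
# Minimal environment check: fails at the import if nvdiffrast did not build.
import torch
import nvdiffrast.torch as dr

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("nvdiffrast imported OK")
```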
Training code and datasets: coming soon.
We recommend generating a file list that contains the absolute paths of your images; an example is provided in test_imgs. The network produces three outputs for each image: the rendered image, the UV map, and the OBJ file.
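If you need to build such a file list, a minimal sketch like the one below works; `test_imgs` and `test.txt` match the provided example, but you can point them anywhere:

```python
# Write the absolute paths of all images in a folder to a text file, one per line.
import os

images_dir = "test_imgs"
exts = (".jpg", ".jpeg", ".png")

with open("test.txt", "w") as f:
    for name in sorted(os.listdir(images_dir)):
        if name.lower().endswith(exts):
            f.write(os.path.abspath(os.path.join(images_dir, name)) + "\n")
```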
You can start with our provided example:

```bash
CUDA_VISIBLE_DEVICES=0 python scripts/visualize.py --images_list_file test.txt --outdir test_imgs/output
```

To run on your own images, point the script at your file list and output directory:

```bash
CUDA_VISIBLE_DEVICES=0 python scripts/visualize.py --images_list_file your_txt_list --outdir your_output_path
```
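To get a quick overview of what a run produced, you can count the output files by extension (a small sketch; the exact per-image filenames are determined by scripts/visualize.py, so this only inspects the folder):

```python
# Count files in the output directory, grouped by extension.
import collections
import os

outdir = "test_imgs/output"  # matches the example command above
by_ext = collections.Counter(os.path.splitext(n)[1].lower() for n in os.listdir(outdir))
for ext, n in sorted(by_ext.items()):
    print(f"{ext or '(no extension)'}: {n} file(s)")
```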
If you find UV-IDM useful for your research and applications, please cite us using this BibTeX:
```bibtex
@inproceedings{li2024uv,
  title={UV-IDM: Identity-Conditioned Latent Diffusion Model for Face UV-Texture Generation},
  author={Li, Hong and Feng, Yutang and Xue, Song and Liu, Xuhui and Zeng, Bohan and Li, Shanglin and Liu, Boyu and Liu, Jianzhuang and Han, Shumin and Zhang, Baochang},
  booktitle={CVPR},
  year={2024}
}
```
This work was supported by the following funding sources:
- National Key Research and Development Program of China (2023YFC3300029)
- Zhejiang Provincial Natural Science Foundation of China (LD24F020007)
- Beijing Natural Science Foundation (L223024)
- National Natural Science Foundation of China (62076016)
- "One Thousand Plan" projects in Jiangxi Province (Jxsg2023102268)
- Beijing Municipal Science & Technology Commission, Administrative Commission of Zhongguancun Science Park (Z231100005923035)