Awesome 3D Face (Kor.)

🪄 Table of Contents

  • Conference/Journal Name Abbr.
  • 3D Face Reconstruction
    • 📑 Papers
    • 🫥 Facial/Head Models
    • 🎖️ Leaderboards, Benchmarks
    • 📦 Datasets
  • 3D Facial Animation
    • 📑 Papers
    • 📦 Datasets
  • Other Face-Related Lists
Conference/Journal Name Abbr.

| Abbr. | Full name |
| ----- | --------- |
| AVSS | International Conference on Advanced Video and Signal Based Surveillance |
| CVPR | IEEE Conference on Computer Vision and Pattern Recognition |
| FG | IEEE International Conference on Automatic Face & Gesture Recognition |
| ICCV | IEEE International Conference on Computer Vision |
| J-HGBU | Joint ACM Workshop on Human Gesture and Behavior Understanding |
| MM | ACM Multimedia Conference |
| PR | Pattern Recognition |
| TAC | IEEE Transactions on Affective Computing |
| TIP | IEEE Transactions on Image Processing |
| ToG | ACM Transactions on Graphics |
| TMM | IEEE Transactions on Multimedia |
| TVCG | IEEE Transactions on Visualization and Computer Graphics |

Note

  • Papers are sorted in reverse chronological order by publication year; if a paper first appeared on arXiv but was later published at a conference or in a journal, the venue publication is the one listed.
  • Particularly important papers are marked with ✨.

3D Face Reconstruction

📑 Papers

  • AlbedoGAN (2024 WACV) [Code]
    Towards Realistic Generative 3D Face Models
    Aashish Rai, Hiresh Gupta, Ayush Pandey, Francisco Vicente Carrasco, Shingo Jason Takagi, Amaury Aubel, Daeil Kim, Aayush Prakash, Fernando de la Torre

  • Speech4Mesh (2023 ICCV)
    Speech4Mesh: Speech-Assisted Monocular 3D Facial Reconstruction for Speech-Driven 3D Facial Animation
    Shan He, Haonan He, Shuo Yang, Xiaoyan Wu, Pengcheng Xia, Bing Yin, Cong Liu, LiRong Dai, Chang Xu

  • TokenFace (2023 ICCV)
    Accurate 3D Face Reconstruction with Facial Component Tokens
    Tianke Zhang, Xuangeng Chu, Yunfei Liu, Lijian Lin, Zhendong Yang, Zhengzhuo Xu, Chengkun Cao, Fei Yu, Changyin Zhou, Chun Yuan, Yu Li
    To decouple the individual FLAME parameters, each parameter is given its own token and fed into a ViT together with the image tokens. Trained jointly on 2D and 3D datasets.

  • HiFace (2023 ICCV)
    HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details
    Zenghao Chai, Tianke Zhang, Tianyu He, Xu Tan, Tadas Baltrušaitis, HsiangTao Wu, Runnan Li, Sheng Zhao, Chun Yuan, Jiang Bian
    Splits facial detail into static and dynamic detail. Trained on a synthetic dataset.

  • SPECTRE (2023 CVPR) ✨ [Code]
    Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos
    Panagiotis P. Filntisis, George Retsinas, Foivos Paraperas-Papantoniou, Athanasios Katsamanis, Anastasios Roussos, Petros Maragos

  • FOCUS (2023 CVPR) [Code]
    Robust Model-based Face Reconstruction through Weakly-Supervised Outlier Segmentation
    Chunlu Li, Andreas Morel-Forster, Thomas Vetter, Bernhard Egger, and Adam Kortylewski
    Performs reconstruction and outlier segmentation jointly to obtain results that are robust to outliers.

  • HRN (2023 CVPR) [Code]
    A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images
    Biwen Lei, Jianqiang Ren, Mengyang Feng, Miaomiao Cui, Xuansong Xie

  • DenseLandmarks (2022 ECCV)
    3D Face Reconstruction with Dense Landmarks
    Erroll Wood, Tadas Baltrušaitis, Charlie Hewitt, Matthew Johnson, Jingjing Shen, Nikola Milosavljevic, Daniel Wilde, Stephan Garbin, Chirag Raman, Jamie Shotton, Toby Sharp, Ivan Stojiljkovic, Thomas J. Cashman, Julien Valentin
    Predicts more than 700 dense landmarks together with their uncertainty. Uses a synthetic dataset to obtain accurate annotations.

  • MICA (2022 ECCV) [Code]
    Towards Metrical Reconstruction of Human Faces
    Wojciech Zielonka, Timo Bolkart, Justus Thies
    Performs metrical reconstruction. Uses features extracted from a pre-trained face recognition model to better separate different identities. Builds a 3D dataset by merging several existing datasets and trains with supervision.

  • EMOCA (2022 CVPR) ✨ [Code]
    EMOCA: Emotion Driven Monocular Face Capture and Animation
    Radek Danecek, Michael Black, Timo Bolkart
    Adds an extra expression encoder and imposes a loss on the emotion difference between the input image and the rendered reconstruction, so the result preserves the emotion well (see the sketch below).
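
A minimal sketch of an emotion-consistency term of the kind described above. The `emotion_net` feature extractor is a placeholder assumption; EMOCA uses its own pre-trained emotion recognition network and additional losses.

```python
import torch
import torch.nn.functional as F

def emotion_consistency_loss(emotion_net, input_image, rendered_image):
    """Compare emotion features of the input photo and the rendered
    reconstruction. `emotion_net` stands in for a frozen emotion-recognition
    network; both images are (B, 3, H, W) tensors."""
    with torch.no_grad():
        target_feat = emotion_net(input_image)   # features of the real photo (no grad)
    pred_feat = emotion_net(rendered_image)      # gradients flow back through the renderer
    return F.mse_loss(pred_feat, target_feat)
```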

  • SynergyNet (2021 3DV) [Code]
    Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry
    Cho-Ying Wu, Qiangeng Xu, Ulrich Neumann

  • Dib et al. (2021 ICCV)
    Towards high fidelity monocular face reconstruction with rich reflectance using self-supervised learning and ray tracing
    Abdallah Dib, Cedric Thebault, Junghyun Ahn, Philippe-Henri Gosselin, Christian Theobalt, Louis Chevallier

  • Fast-GANFIT (2021 TPAMI) [Code]
    Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction
    Baris Gecer, Stylianos Ploumpis, Irene Kotsia, Stefanos Zafeiriou

  • DECA (2021 SIGGRAPH) ✨ [Code]
    Learning an Animatable Detailed 3D Face Model from In-the-Wild Images
    Yao Feng, Haiwen Feng, Michael J. Black, Timo Bolkart
    Predicts detail as a displacement map and combines it with the coarse shape to form the reconstruction; this also makes it possible to swap in a displacement map obtained from another image's expression. Later used as the base model for a large number of follow-up works (see the sketch below).
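
A minimal per-vertex sketch of the displacement idea, with assumed array shapes; DECA itself stores the displacements in a UV map and converts them into a detailed normal map for rendering.

```python
import numpy as np

def add_detail_displacement(coarse_vertices, vertex_normals, displacement):
    """Offset each coarse vertex along its normal by a scalar displacement,
    which is the basic idea behind a detail layer on top of a coarse mesh.
    coarse_vertices: (N, 3), vertex_normals: (N, 3) unit normals,
    displacement:    (N,)   per-vertex offsets predicted by the detail branch."""
    return coarse_vertices + displacement[:, None] * vertex_normals
```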

  • 3DDFA-V2 (2020 ECCV) [Code]
    Towards Fast, Accurate and Stable 3D Dense Face Alignment
    Jianzhu Guo, Xiangyu Zhu, Yang Yang, Fan Yang, Zhen Lei, Stan Z. Li

  • MGCNet (2020 ECCV) [Code]
    Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency
    Jiaxiang Shang, Tianwei Shen, Shiwei Li, Lei Zhou, Mingmin Zhen, Tian Fang, Long Quan

  • UMDFA (2020 ECCV)
    “Look Ma, no landmarks!” – Unsupervised, model-based dense face alignment
    Tatsuro Koizumi, William A. P. Smith

  • GANFIT (2019 CVPR) [Code]
    GANFIT: Generative Adversarial Network Fitting for High Fidelity 3D Face Reconstruction
    Baris Gecer, Stylianos Ploumpis, Irene Kotsia, Stefanos Zafeiriou

  • Luan Tran et al. (2019 CVPR) [Code]
    Towards High-Fidelity Nonlinear 3D Face Morphable Model
    Luan Tran, Feng Liu, Xiaoming Liu

  • RingNet, NoW (2019 CVPR) ✨ [Code]
    Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
    Soubhik Sanyal, Timo Bolkart, Haiwen Feng and Michael J. Black
    Trained so that images of the same person yield consistent shapes while images of different people yield clearly different shapes (see the sketch below). Introduces the NoW benchmark, which is currently the most widely used benchmark.
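
A hedged sketch of a shape-consistency margin loss in this spirit; the function, tensor names, and margin value are illustrative, not RingNet's exact formulation.

```python
import torch
import torch.nn.functional as F

def ring_shape_loss(shape_a, shape_b, shape_other, margin=0.5):
    """Shape codes of two images of the same person (shape_a, shape_b) should
    be closer to each other than to the shape code of a different person
    (shape_other). All tensors are (B, n_shape); the margin is illustrative."""
    d_same = (shape_a - shape_b).pow(2).sum(dim=-1)
    d_diff = (shape_a - shape_other).pow(2).sum(dim=-1)
    return F.relu(d_same - d_diff + margin).mean()
```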

  • Deep3DFaceRecon (2019 CVPRw) [Code]
    Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set
    Yu Deng, Jiaolong Yang, Sicheng Xu, Dong Chen, Yunde Jia, Xin Tong
    Performs weakly-supervised learning from landmarks, facial masks, and similar cues without 3D ground truth. Adds a confidence measurement subnetwork to learn multi-image (image set) reconstruction (see the sketch below).
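
A small sketch of confidence-weighted aggregation of per-image identity coefficients, with assumed tensor names; the paper's actual subnetwork and weighting details differ.

```python
import torch

def aggregate_identity(id_coeffs, confidences):
    """Fuse per-image identity coefficients for the image-set setting.
    id_coeffs:   (num_images, n_id) identity coefficients predicted per image
    confidences: (num_images,) non-negative scores from a confidence subnet."""
    weights = confidences / confidences.sum()
    return (weights[:, None] * id_coeffs).sum(dim=0)
```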

  • PRNet (2018 ECCV) [Code]
    Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network
    Yao Feng, Fan Wu, Xiaohu Shao, Yanfeng Wang, and Xi Zhou

  • Anh Tuấn Trần et al. (2018 CVPR) [Code]
    Extreme 3D Face Reconstruction: Seeing Through Occlusions
    Anh Tuấn Trần, Tal Hassner, Iacopo Masi, Eran Paz, Yuval Nirkin, Gérard Medioni

  • Ayush Tewari et al. (2018 CVPR)
    Self-supervised multi-level face model learning for monocular reconstruction at over 250 Hz
    Ayush Tewari, Michael Zollhöfer, Pablo Garrido, Florian Bernard, Hyeongwoo Kim, Patrick Pérez, Christian Theobalt

  • Zhen-Hua Feng et al. (2018) (2018 FG) [Code]
    Evaluation of dense 3D reconstruction from 2D face images in the wild
    Zhen-Hua Feng, Patrik Huber, Josef Kittler, Peter JB Hancock, Xiao-Jun Wu, Qijun Zhao, Paul Koppen, Matthias Rätsch

  • AffectNet (2017 TAC) [Code]
    AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild
    Ali Mollahosseini, Behzad Hasani, Mohammad H. Mahoor

  • 3DMM-CNN (2017 CVPR) [Code]
    Regressing Robust and Discriminative 3D Morphable Models With a Very Deep Neural Network
    Anh Tuấn Trần, Tal Hassner, Iacopo Masi, Gerard Medioni

🫥 Facial/Head Models

  • LYHM (2020 IJCV) [Link]
    Statistical Modeling of Craniofacial Shape and Texture
    Hang Dai, Nick Pears, William Smith, Christian Duncan
    (No need to read the full paper.)

  • FLAME (2017 SIGGRAPH Asia) ✨ [Code]
    Learning a model of facial shape and expression from 4D scans
    Tianye Li, Timo Bolkart, Michael J. Black, Hao Li, Javier Romero
    A 3DMM-style facial model that represents a face with shape, expression, and pose parameters. (No need to read the full paper; see the sketch below.)
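
A minimal sketch of the linear blend-shape part of a FLAME-style model under assumed array shapes; the real model additionally applies pose-dependent correctives and linear blend skinning of the jaw, neck, and eyeballs.

```python
import numpy as np

def flame_blendshape_template(t_bar, shape_dirs, expr_dirs, beta, psi):
    """Mean template plus identity and expression blend shapes.
    t_bar:      (N, 3)         mean template vertices
    shape_dirs: (N, 3, n_beta) identity blend shapes
    expr_dirs:  (N, 3, n_psi)  expression blend shapes
    beta, psi:  (n_beta,), (n_psi,) identity / expression coefficients."""
    return t_bar + shape_dirs @ beta + expr_dirs @ psi
```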

  • BFM (Basel Face Model) (2009 AVSS) ✨ [Code]
    A 3D Face Model for Pose and Illumination Invariant Face Recognition
    Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, Thomas Vetter
    A 3DMM-style facial model, widely used both before FLAME and alongside it. (No need to read the full paper.)

  • SCAPE (2005 SIGGRAPH) ✨
    SCAPE: shape completion and animation of people
    Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, James Davis
    (No need to read the full paper.)

  • 3DMM (1999 SIGGRAPH) ✨
    A Morphable Model for the Synthesis of 3D Faces
    Volker Blanz, Thomas Vetter
    A 3D face modeling approach in which shape and texture are represented in a vector space and expressed as a linear combination of bases. (No need to read the full paper; see the sketch below.)
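
A minimal sketch of the linear 3DMM formulation, using generic symbol names rather than the paper's notation.

```python
import numpy as np

def morphable_model(mean_shape, shape_basis, mean_texture, texture_basis, alpha, gamma):
    """Shape and texture are each a mean plus a linear combination of PCA bases.
    mean_shape, mean_texture:   (3N,) flattened per-vertex positions / colors
    shape_basis, texture_basis: (3N, k) principal components
    alpha, gamma:               (k,) shape / texture coefficients."""
    shape = mean_shape + shape_basis @ alpha
    texture = mean_texture + texture_basis @ gamma
    return shape, texture
```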

🎖️ Leaderboards, Benchmarks

  • NoW (2019 CVPR) ✨
    Measures the distance between a scan of the subject's face (GT) and the mesh predicted by the model (prediction), comparing the median (↓), mean (↓), and std (↓). The dataset is split into expression, occlusion, and varying-view conditions (see the sketch below).
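
A hedged sketch of the scan-to-mesh error using trimesh, assuming rigid alignment has already been performed; for actual comparisons the official NoW evaluation code should be used.

```python
import numpy as np
import trimesh

def scan_to_mesh_error(scan_points, pred_mesh):
    """For every ground-truth scan point, take the distance to the closest
    point on the predicted mesh surface and report median / mean / std.
    scan_points: (N, 3) array of scan vertices, pred_mesh: trimesh.Trimesh."""
    _, distances, _ = trimesh.proximity.closest_point(pred_mesh, scan_points)
    return np.median(distances), distances.mean(), distances.std()
```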

  • REALY (2022 ECCV)

  • Feng et al. (2018 FG) ✨
    After rigidly aligning the scan (GT) and the reconstructed mesh (prediction), measures the distance from each scan vertex to the closest point on the reconstructed mesh. (Similar to NoW.)

  • Stirling (2018 FG)

📦 Datasets

  • MICA Dataset (2022 ECCV) [Link]
    Towards Metrical Reconstruction of Human Faces
    Wojciech Zielonka, Timo Bolkart, Justus Thies
    Several existing datasets collected in MICA and fitted with FLAME.

  • LYHM, Headspace Dataset (2020 IJCV) [Link]
    Statistical Modeling of Craniofacial Shape and Texture
    Hang Dai, Nick Pears, William Smith, Christian Duncan
    The head-scan dataset used to build the LYHM head model.

  • FRGC (2005 CVPR) [Link]
    Overview of the Face Recognition Grand Challenge

  • Stirling (2018 FG)

  • D3DFACS (2011 ICCV) [Link]
    A FACS Valid 3D Dynamic Action Unit Database with Applications to 3D Dynamic Morphable Facial Modeling
    Darren Cosker, Eva Krumhuber, Adrian Hilton

  • Florence 2D/3D (2011 J-HGBU) [Link]
    The Florence 2D/3D Hybrid Face Dataset
    Andrew D. Bagdanov, Alberto Del Bimbo, Iacopo Masi

  • FaceWarehouse (2013 TVCG) [Link]
    FaceWarehouse: A 3D Facial Expression Database for Visual Computing
    Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, Kun Zhou

  • AR database (1998) [Link]
    The AR Face Database
    A. M. Martinez, R. Benavente

3D Facial Animation

📑 Papers

  • EMOTE (2023 SIGGRAPH Asia) [Code]
    Emotional Speech-Driven Animation with Content-Emotion Disentanglement
    Radek Daněček, Kiran Chhatre, Shashank Tripathi, Yandong Wen, Michael J. Black, Timo Bolkart

  • EmoTalk (2023 ICCV) [Code]
    EmoTalk: Speech-driven emotional disentanglement for 3D face animation
    Ziqiao Peng, Haoyu Wu, Zhenbo Song, Hao Xu, Xiangyu Zhu, Jun He, Hongyan Liu, Zhaoxin Fan

  • CodeTalker (2023 CVPR) [Code]
    CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
    Jinbo Xing, Menghan Xia, Yuechen Zhang, Xiaodong Cun, Jue Wang, Tien-Tsin Wong

  • SelfTalk (2023 ACM MM) [Code]
    SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces
    Ziqiao Peng, Yihao Luo, Yue Shi, Hao Xu, Xiangyu Zhu, Jun He, Hongyan Liu, Zhaoxin Fan
    To make the lip shapes more accurate, the predicted facial animation is passed to a lip-reading interpreter to produce text, which is then compared with the text produced by a speech recognizer from the audio (see the sketch below).
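
A rough, hypothetical sketch of such a text-consistency term; `lip_reader`, the use of an MSE on text features, and the feature shapes are assumptions, not SelfTalk's actual interface or loss.

```python
import torch.nn.functional as F

def lip_text_consistency_loss(lip_reader, asr_text_features, predicted_animation):
    """A frozen lip-reading network (placeholder `lip_reader`) maps the
    predicted facial animation to text features, which are pulled toward the
    text features a speech recognizer produced from the driving audio.
    Both feature tensors are assumed to be (T, d)."""
    lip_text_features = lip_reader(predicted_animation)
    return F.mse_loss(lip_text_features, asr_text_features)
```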

  • MeshTalk (2021 ICCV) [Code]
    MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement
    Alexander Richard, Michael Zollhofer, Yandong Wen, Fernando de la Torre, Yaser Sheikh
    Separates audio-correlated and audio-uncorrelated information in the latent space, aiming for more accurate animation of both the lips and the upper face.

  • FaceFormer (2022 CVPR) ✨ [Code]
    FaceFormer: Speech-Driven 3D Face Animation with Transformers
    Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura
    A transformer-based architecture that takes audio as context and generates face meshes autoregressively (see the sketch below).
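
A hedged sketch of autoregressive decoding of this kind; `model.decode` and the tensor shapes are hypothetical, not FaceFormer's actual API.

```python
import torch

@torch.no_grad()
def autoregressive_decode(model, audio_features, num_frames, start_token):
    """At each step the decoder attends to the audio features (context) and to
    the motion frames generated so far, and emits the next frame. `model` is a
    placeholder whose decode(motion_seq, audio_features) returns one output per
    input frame; start_token is a learned (V*3,) start embedding."""
    motion_seq = [start_token]
    for _ in range(num_frames):
        inp = torch.stack(motion_seq, dim=0)                 # (t, V*3) frames so far
        next_frame = model.decode(inp, audio_features)[-1]   # newest prediction
        motion_seq.append(next_frame)
    return torch.stack(motion_seq[1:], dim=0)                # (num_frames, V*3)
```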

  • VOCA, VOCASET (2019 CVPR) ✨ [Code]
    Capture, Learning, and Synthesis of 3D Speaking Styles
    Daniel Cudeiro, Timo Bolkart, Cassidy Laidlaw, Anurag Ranjan, Michael J. Black
    Unlike prior speaker-dependent work, generates speaker-independent facial animation from a given audio input. Also builds VOCASET, a dataset of 4D face scans with accompanying speech.

  • Tero Karras et al. (2017) (2017 SIGGRAPH) ✨
    Audio-driven facial animation by joint end-to-end learning of pose and emotion
    Tero Karras, Timo Aila, Samuli Laine, Antti Herva, Jaakko Lehtinen

  • JALI (2016 ToG)
    JALI: an animator-centric viseme model for expressive lip synchronization
    Pif Edwards, Chris Landreth, Eugene Fiume, Karan Singh

📦 Datasets

  • CelebV-HQ (2022 ECCV) ✨ [Link]
    CelebV-HQ: A Large-Scale Video Facial Attributes Dataset
    Hao Zhu, Wayne Wu, Wentao Zhu, Liming Jiang, Siwei Tang, Li Zhang, Ziwei Liu, and Chen Change Loy
    A video dataset collected from YouTube.

  • VoxCeleb2 (2018 INTERSPEECH) ✨ [Link]
    VoxCeleb2: Deep Speaker Recognition
    Joon Son Chung, Arsha Nagrani, Andrew Zisserman
    A video dataset collected from YouTube.

  • MEAD (2020 ECCV) ✨ [Link]
    MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation
    Kaisiyuan Wang, Qianyi Wu, Linsen Song, Zhuoqian Yang, Wayne Wu, Chen Qian, Ran He, Yu Qiao, Chen Change Loy
    A video dataset recorded in a studio; multi-view, covering a range of emotions at several intensity levels.

  • VOCA, VOCASET (2019 CVPR) ✨ [Link]
    Capture, Learning, and Synthesis of 3D Speaking Styles
    Daniel Cudeiro, Timo Bolkart, Cassidy Laidlaw, Anurag Ranjan, Michael J. Black
    Unlike prior speaker-dependent work, generates speaker-independent facial animation from a given audio input. Also builds VOCASET, a dataset of 4D face scans with accompanying speech.

  • CoMA (2018 ECCV)
    Generating 3D faces using Convolutional Mesh Autoencoders
    Anurag Ranjan, Timo Bolkart, Soubhik Sanyal, Michael J. Black

  • BIWI (2010 TMM) ✨ [Link]
    A 3-D Audio-Visual Corpus of Affective Communication
    Gabriele Fanelli, Thibaut Weise, Juergen Gall, Luc Van Gool

Other Face-Related Lists
