
Awesome Face Reenactment/Talking Face Generation

Papers on face swapping in deepfakes are summarized separately in the companion repo awesome-faceSwap.

Survey

  • The Creation and Detection of Deepfakes: A Survey (arXiv, 2020) [paper]
  • DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection (arXiv, 2020) [paper]
  • A Review on Face Reenactment Techniques (I4Tech, 2020) [paper]
  • What comprises a good talking-head video generation?: A Survey and Benchmark (arXiv, 2020) [paper]
  • Deep Audio-Visual Learning: A Survey (arXiv, 2020) [paper]
  • Critical Review of Human Face Reenactment Methods (JIG, 2022) (in Chinese) [paper]

Talking Face Generation Papers

2023

  • MODA: Mapping-Once Audio-driven Portrait Animation with Dual Attentions (ICCV, 2023) [paper] [project]
  • Efficient Region-Aware Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis (ICCV, 2023) [paper]
  • EMMN: Emotional Motion Memory Network for Audio-driven Emotional Talking Face Generation (ICCV, 2023) [paper]
  • Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation (ICCV, 2023) [paper] [code]
  • Talking Head Generation with Probabilistic Audio-to-Visual Diffusion Priors (ICCV, 2023) [paper]
  • DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation (CVPR, 2023) [paper] [code]
  • Progressive Disentangled Representation Learning for Fine-Grained Controllable Talking Head Synthesis (CVPR, 2023) [paper] [webpage]
  • One-Shot High-Fidelity Talking-Head Synthesis with Deformable Neural Radiance Field (CVPR, 2023) [paper] [webpage]
  • Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert (CVPR, 2023) [paper]
  • LipFormer: High-Fidelity and Generalizable Talking Face Generation With a Pre-Learned Facial Codebook (CVPR, 2023) [paper]
  • High-fidelity Generalized Emotional Talking Face Generation with Multi-modal Emotion Space Learning (CVPR, 2023) [paper]
  • OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering (CVPR, 2023) [paper] [code]
  • IP_LAP: Identity-Preserving Talking Face Generation with Landmark and Appearance Priors (CVPR, 2023) [code]
  • SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation (CVPR, 2023) [paper] [webpage] [code]
  • DPE: Disentanglement of Pose and Expression for General Video Portrait Editing (CVPR, 2023) [paper] [webpage] [code]
  • StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles (AAAI, 2023) [paper] [code]
  • DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video (AAAI, 2023) [paper] [code]
  • Audio-Visual Face Reenactment (WACV, 2023) [paper] [webpage] [code]
  • Emotionally Enhanced Talking Face Generation (arXiv, 2023) [paper] [webpage] [code]
  • Compact Temporal Trajectory Representation for Talking Face Video Compression (TCSVT, 2023) [paper]

2022

  • VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild (SIGGRAPH ASIA, 2022) [paper] [code]
  • Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis (ECCV, 2022) [paper] [code]
  • Compressing Video Calls using Synthetic Talking Heads (BMVC, 2022) [paper] [webpage]
  • Expressive Talking Head Generation With Granular Audio-Visual Control (CVPR, 2022) [paper]
  • EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model (SIGGRAPH, 2022) [paper]
  • Emotion-Controllable Generalized Talking Face Generation (IJCAI, 2022) [paper]
  • StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pretrained StyleGAN (ECCV, 2022) [paper]
  • SyncTalkFace: Talking Face Generation with Precise Lip-syncing via Audio-Lip Memory (AAAI, 2022) [paper]
  • One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning (AAAI, 2022) [paper]
  • Audio-Driven Talking Face Video Generation with Dynamic Convolution Kernels (TMM, 2022) [paper]
  • Audio-driven Dubbing for User Generated Contents via Style-aware Semi-parametric Synthesis (TCSVT, 2022) [paper]

2021

  • Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH ASIA, 2021) [paper]
  • Imitating Arbitrary Talking Style for Realistic Audio-Driven Talking Face Synthesis (MM, 2021) [paper] [code]
  • Towards Realistic Visual Dubbing with Heterogeneous Sources (MM, 2021) [paper]
  • Talking Head Generation with Audio and Speech Related Facial Action Units (BMVC, 2021) [paper]
  • 3D Talking Face with Personalized Pose Dynamics (TVCG, 2021) [paper]
  • FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning (ICCV, 2021) [paper] [code]
  • AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis (ICCV, 2021) [paper] [code]
  • Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion (IJCAI, 2021) [paper]
  • Flow-Guided One-Shot Talking Face Generation With a High-Resolution Audio-Visual Dataset (CVPR, 2021) [paper]
  • Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation (CVPR, 2021) [paper] [code]
  • Audio-Driven Emotional Video Portraits (CVPR, 2021) [paper] [code]
  • Everything's Talkin': Pareidolia Face Reenactment (CVPR, 2021) [paper]
  • APB2FaceV2: Real-Time Audio-Guided Multi-Face Reenactment (ICASSP, 2021) [paper] [code]

2020

  • Talking-head Generation with Rhythmic Head Motion (ECCV, 2020) [paper] [code]
  • MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation (ECCV, 2020) [paper] [code]
  • A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild (MM, 2020) [paper] [code]
  • Arbitrary Talking Face Generation via Attentional Audio-Visual Coherence Learning (IJCAI, 2020) [paper]
  • APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals (ICASSP, 2020) [paper] [code]
  • MakeItTalk: Speaker-Aware Talking Head Animation (SIGGRAPH ASIA, 2020) [paper] [code]
  • Everybody’s Talkin’: Let Me Talk as You Want (arXiv, 2020) [paper]
  • Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose (arXiv, 2020) [paper] [code]
  • Multimodal Inputs Driven Talking Face Generation with Spatial–Temporal Dependency (TCSVT, 2020) [paper]

2019

  • Talking Face Generation by Adversarially Disentangled Audio-Visual Representation (AAAI, 2019) [paper]
  • Towards Automatic Face-to-Face Translation (MM, 2019) [paper] [code]
  • Few-Shot Adversarial Learning of Realistic Neural Talking Head Models (ICCV, 2019) [paper]
  • Speech2Face: Learning the Face Behind a Voice (CVPR, 2019) [paper] [code]
  • Hierarchical Cross-Modal Talking Face Generation with Dynamic Pixel-Wise Loss (CVPR, 2019) [paper] [code]
  • Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks (ICASSP, 2019) [paper] [code]
  • Face Reconstruction from Voice using Generative Adversarial Networks (NeurIPS, 2019) [paper]
  • Talking Face Generation by Conditional Recurrent Adversarial Network (IJCAI, 2019) [paper] [code]

2018

  • Lip Movements Generation at a Glance (ECCV, 2018) [paper]

2017

  • Synthesizing Obama: learning lip sync from audio (TOG, 2017) [paper]
  • You said that? (BMVC, 2017) [paper] [code]

Face Reenactment Papers

2024

  • 3D-Aware Talking-Head Video Motion Transfer (WACV, 2024) [paper]

2023

  • High-Fidelity and Freely Controllable Talking Head Video Generation (CVPR, 2023) [paper]
  • MetaPortrait: Identity-Preserving Talking Head Generation with Fast Personalized Adaptation (CVPR, 2023) [paper] [webpage] [code]
  • HR-Net: a landmark based high realistic face reenactment network (TCSVT, 2023) [paper]

2022

  • Dual-Generator Face Reenactment (CVPR, 2022) [paper]
  • Depth-Aware Generative Adversarial Network for Talking Head Video Generation (CVPR, 2022) [paper]
  • Latent Image Animator: Learning to Animate Images via Latent Space Navigation (ICLR, 2022) [paper]
  • Finding Directions in GAN’s Latent Space for Neural Face Reenactment (BMVC, 2022) [paper] [code]
  • FSGANv2: Improved Subject Agnostic Face Swapping and Reenactment (PAMI, 2022) [paper]

2021

  • PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering (ICCV, 2021) [paper]
  • LI-Net: Large-Pose Identity-Preserving Face Reenactment Network (ICME, 2021) [paper]
  • One-shot Face Reenactment Using Appearance Adaptive Normalization (AAAI, 2021) [paper]
  • A unified framework for high fidelity face swap and expression reenactment (TCSVT, 2021) [paper]

2020

  • One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing (arXiv, 2020) [paper]
  • FACEGAN: Facial Attribute Controllable rEenactment GAN (WACV, 2020) [paper]
  • LandmarkGAN: Synthesizing Faces from Landmarks (arXiv, 2020) [paper]
  • Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars (ECCV, 2020) [paper] [code]
  • Mesh Guided One-shot Face Reenactment using Graph Convolutional Networks (MM, 2020) [paper]
  • Learning Identity-Invariant Motion Representations for Cross-ID Face Reenactment (CVPR, 2020) [paper]
  • ReenactNet: Real-time Full Head Reenactment (arXiv, 2020) [paper]
  • FReeNet: Multi-Identity Face Reenactment (CVPR, 2020) [paper] [code]
  • FaR-GAN for One-Shot Face Reenactment (CVPRW, 2020) [paper]
  • One-Shot Identity-Preserving Portrait Reenactment (arXiv, 2020) [paper]
  • Neural Head Reenactment with Latent Pose Descriptors (CVPR, 2020) [paper] [code]
  • ActGAN: Flexible and Efficient One-shot Face Reenactment (IWBF, 2020) [paper]
  • Realistic Face Reenactment via Self-Supervised Disentangling of Identity and Pose (AAAI, 2020) [paper]

2019

  • FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis (AAAI, 2019) [paper]
  • First Order Motion Model for Image Animation (NeurIPS, 2019) [paper] [code]
  • MarioNETte: Few-shot Face Reenactment Preserving Identity of Unseen Targets (AAAI, 2019) [paper]
  • Any-to-one Face Reenactment Based on Conditional Generative Adversarial Network (APSIPA, 2019) [paper]
  • Make a Face: Towards Arbitrary High Fidelity Face Manipulation (ICCV, 2019) [paper]
  • One-shot Face Reenactment (BMVC, 2019) [paper] [code]
  • Deferred Neural Rendering: Image Synthesis using Neural Textures (TOG, 2019) [paper]
  • Animating Arbitrary Objects via Deep Motion Transfer (CVPR, 2019) [paper] [code]
  • FSGAN: Subject Agnostic Face Swapping and Reenactment (ICCV, 2019) [paper] [code]

2018

  • GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV, 2018) [paper] [code]
  • ReenactGAN: Learning to Reenact Faces via Boundary Transfer (ECCV, 2018) [paper] [code]
  • Deep Video Portraits (SIGGRAPH, 2018) [paper]
  • X2Face: A Network for Controlling Face Generation Using Images, Audio, and Pose Codes (ECCV, 2018) [paper] [code]

2016

  • Face2Face: Real-time Face Capture and Reenactment of RGB Videos (CVPR, 2016) [paper]

