Inversion-Based Face Swapping With Diffusion Model


Bibliographic Details
Main Authors: Daehyun Yoo, Hongchul Lee, Jiho Kim
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10804772/
Description
Summary: Face swapping involves replacing a face in an image with another face while ensuring the seamless integration of the source face into the target image. Previous studies have primarily utilized generative adversarial network-based models for face swapping. This paper introduces inversion-based face swapping (InFS), a novel framework employing diffusion inversion. The key contributions of our work include: 1) a facial attribute encoder that consolidates attribute information into a single embedding vector, utilizing the architecture of the pSp encoder, and 2) an enhanced face swapping pipeline that overcomes pose limitations through reenactment preprocessing, addressing the challenge of incorrect face swapping at extreme angles. To preserve the target image's attribute information that may be lost during the diffusion inversion process, we incorporate the information extracted by the facial attribute encoder. This embedding vector serves as a crucial condition in the diffusion inversion process, facilitating the prediction of noisy images. Subsequently, the predicted noisy image is processed by a pretrained ID-conditional DDPM for face swapping. Our experimental results show that InFS outperforms state-of-the-art methods in preserving the identity, expression, and shape characteristics of target images. Furthermore, the proposed InFS achieves effective face swapping without requiring additional guidance and reduces inference time by approximately 6.96 seconds compared to previous diffusion-based approaches.
ISSN:2169-3536
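The summary describes a three-stage pipeline: a pSp-style encoder compresses the target's attributes into one embedding, that embedding conditions the diffusion inversion toward a noisy latent, and an ID-conditional DDPM denoises the latent with the source identity to produce the swap. The toy sketch below illustrates only that data flow; every function, shape, and numeric choice is a hypothetical stand-in, not the authors' InFS implementation.

```python
import numpy as np

# Illustrative sketch of the InFS-style data flow described in the abstract.
# All names, shapes, and update rules here are hypothetical stand-ins.

rng = np.random.default_rng(0)

def attribute_encoder(target_img, dim=512):
    """Toy stand-in for the pSp-style facial attribute encoder:
    consolidates the target image into a single embedding vector."""
    flat = target_img.reshape(-1)
    proj = rng.standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    return proj @ flat  # one embedding summarizing target attributes

def conditioned_inversion(img, cond, steps=10):
    """Toy diffusion inversion: each step moves the latent toward noise
    while the attribute embedding conditions the trajectory, so target
    attributes are not lost on the way to the noisy image."""
    x = img.reshape(-1).copy()
    c = np.resize(cond, x.size)
    for t in range(steps):
        alpha = 1.0 - 0.5 * (t + 1) / steps
        x = alpha * x + (1.0 - alpha) * c  # condition steers the inversion
    return x

def id_conditional_ddpm(noisy, source_id, steps=10):
    """Toy reverse process: denoises the inverted latent while injecting
    the source identity embedding, yielding the swapped face."""
    x = noisy.copy()
    s = np.resize(source_id, x.size)
    for _ in range(steps):
        x = 0.9 * x + 0.1 * s  # identity condition accumulates over steps
    return x

# End-to-end: target attributes condition the inversion,
# source identity conditions the denoising.
target = rng.standard_normal((8, 8, 3))      # stand-in target image
source_id = rng.standard_normal(512)         # stand-in source ID embedding

attr = attribute_encoder(target)
noisy = conditioned_inversion(target, attr)
swapped = id_conditional_ddpm(noisy, source_id)
print(swapped.shape)  # latent keeps the flattened target image size
```

The sketch makes the abstract's point about guidance concrete: because the attribute embedding is injected during inversion itself, the reverse process needs no extra guidance signal beyond the source identity condition.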