TCAN: Animating Human Images with
Temporally Consistent Pose Guidance Using Diffusion Models
ECCV 2024

Jeongho Kim*
KAIST
Min-Jung Kim*
KAIST
Junsoo Lee
NAVER WEBTOON AI
Jaegul Choo
KAIST
(*: equal contribution)
Paper · arXiv · Code

Motion Transfer To Various Identities


Motion Transfer To Animation Characters

Additional Results



Overall Architecture

Problem Definition

Given a source image and a driving video with F frames, our aim is to generate a video that:

- preserves the appearance and identity of the person in the source image,
- follows the pose sequence of the driving video, and
- remains temporally consistent across all F frames.
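To make the input/output contract concrete, here is a shape-level stub in PyTorch. The function name `animate`, the tensor layout, and the resolution are illustrative assumptions, not the actual interface.

```python
import torch

def animate(source: torch.Tensor, driving: torch.Tensor) -> torch.Tensor:
    """Shape-level stub of the animation task.

    source:  (3, H, W)     -- a single source image
    driving: (F, 3, H, W)  -- F driving frames providing the pose sequence
    returns: (F, 3, H, W)  -- F frames: source identity, driving motion
    """
    num_frames = driving.shape[0]
    # Placeholder: a real model would render the source identity in each
    # driving pose; here we simply broadcast the source image to F frames.
    return source.unsqueeze(0).expand(num_frames, -1, -1, -1).clone()

frames = animate(torch.rand(3, 256, 256), torch.rand(16, 3, 256, 256))
assert frames.shape == (16, 3, 256, 256)
```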

Our approach

We propose TCAN, a novel human image animation framework based on diffusion models that maintains temporal consistency and generalizes well to unseen domains. Our newly proposed modules are as follows:

- Appearance-Pose Adaptation: we keep the pre-trained pose ControlNet frozen to preserve the knowledge it acquired from numerous pose-image pairs, and instead adapt the denoising UNet with LoRA layers so that the latent spaces of the pose and appearance features become aligned (see the sketch after this list).
- Temporal ControlNet: we add a temporal layer to the ControlNet, making the pose guidance robust to outlier predictions from the pose detector.
- Pose-driven temperature map: analyzing attention maps along the temporal axis, we design a temperature map that leverages pose information at inference time, yielding a more static background.
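As a rough illustration of the first module, the snippet below shows in plain PyTorch what "freeze the pre-trained layer, train only a low-rank update" looks like. `LoRALinear` and the zero-initialized up-projection are a generic LoRA recipe, sketched here under our own assumptions, not TCAN's released code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # pre-trained weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)         # update starts at zero, so the
        self.scale = scale                     # wrapper equals the base at init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

# Wrapping an attention projection: only `down`/`up` receive gradients.
layer = LoRALinear(nn.Linear(320, 320), rank=8)
out = layer(torch.randn(2, 77, 320))
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
assert trainable == ["down.weight", "up.weight"]
```

In the paper's setup, adapters of this kind sit inside the denoising UNet while the pose ControlNet only contributes frozen pose features; the ControlNet itself is never fine-tuned.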