- cross-posted to:
- stable_diffusion@lemmy.dbzer0.com
Abstract
We propose the first video diffusion framework for reference-based lineart video colorization. Unlike previous works that rely solely on image generative models to colorize lineart frame by frame, our approach leverages a large-scale pretrained video diffusion model to generate colorized animation videos. This leads to more temporally consistent results and better handling of large motions. First, we introduce Sketch-guided ControlNet, which provides additional control for finetuning an image-to-video diffusion model for controllable video synthesis, enabling the generation of animation videos conditioned on lineart. We then propose Reference Attention to facilitate the transfer of colors from the reference frame to frames containing fast and expansive motions. Finally, we present a novel sequential sampling scheme, incorporating the Overlapped Blending Module and Prev-Reference Attention, to extend the video diffusion model beyond its original fixed-length limitation for long video colorization. Both qualitative and quantitative results demonstrate that our method significantly outperforms state-of-the-art techniques in frame and video quality as well as temporal consistency. Moreover, our method can generate high-quality, temporally consistent long animation videos with large motions, which previous works cannot achieve. Our code and model are available at https://luckyhzt.github.io/lvcd.
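For intuition about the core mechanism: Reference Attention lets the tokens of each frame being denoised cross-attend to the tokens of the colored reference frame, so color information can propagate even under large motion. Below is a minimal, hypothetical PyTorch sketch of such a reference-attention block; this is not the authors' implementation, and every name, shape, and design choice here is an illustrative assumption.

```python
# Hypothetical sketch of a reference-attention block (illustrative only;
# the paper's actual Reference Attention may differ in every detail).
import torch
import torch.nn as nn


class ReferenceAttention(nn.Module):
    """Tokens of a frame being denoised cross-attend to tokens of the
    colored reference frame, so colors can transfer across large motion."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)       # queries from noisy frame
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)  # keys/values from reference
        self.proj = nn.Linear(dim, dim)

    def forward(self, frame_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (B, N, C) spatial tokens of a frame being denoised
        # ref_tokens:   (B, M, C) spatial tokens of the colored reference frame
        B, N, C = frame_tokens.shape
        h = self.num_heads
        q = self.to_q(frame_tokens).reshape(B, N, h, C // h).transpose(1, 2)
        k, v = self.to_kv(ref_tokens).chunk(2, dim=-1)
        k = k.reshape(B, -1, h, C // h).transpose(1, 2)
        v = v.reshape(B, -1, h, C // h).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale     # (B, h, N, M)
        out = attn.softmax(dim=-1) @ v                    # (B, h, N, C//h)
        out = out.transpose(1, 2).reshape(B, N, C)
        return frame_tokens + self.proj(out)  # residual add keeps base features


# Toy usage: 14 frames of 32x32 latent tokens, all attending to one reference.
block = ReferenceAttention(dim=320)
frames = torch.randn(14, 32 * 32, 320)                 # frames being denoised
ref = torch.randn(1, 32 * 32, 320).expand(14, -1, -1)  # reference broadcast per frame
print(block(frames, ref).shape)  # torch.Size([14, 1024, 320])
```

The residual connection is one plausible way to inject reference colors without disturbing the pretrained video model's base features; how the real method fuses this with the Sketch-guided ControlNet stream is described in the paper.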
Paper: https://arxiv.org/abs/2409.12960
Project Page: https://luckyhzt.github.io/lvcd
Code: (coming soon)
Supplementary Demo clips: https://luckyhzt.github.io/lvcd/supplementary/supplementary.html
Let me know if this kind of post isn't allowed here. It seemed fine to post under the rules.
It's not against the rules and it's relevant to the community, so you're fine.