Abstract
Existing personalized generation methods, such as Textual Inversion, DreamBooth, and LoRA, have made significant progress in custom image creation. However, these methods require expensive computational resources and time for fine-tuning, as well as multiple reference images, which limits their application in the real world. InstantID addresses these limitations with a plug-and-play module that handles image personalization in any style using only a single face image while maintaining high fidelity. To preserve face identity, we introduce a novel face encoder that retains the intricate details of the reference image. InstantID's performance and efficiency across diverse scenarios demonstrate its potential for various real-world applications. Our work is compatible as a plugin with common pretrained text-to-image diffusion models such as SD 1.5 and SDXL. Code and pre-trained checkpoints will be made public soon!
Paper: https://arxiv.org/abs/2401.07519
Code: https://github.com/InstantID/InstantID (coming soon)
Project Page: https://instantid.github.io/
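Since the code isn't released yet, here's a rough sketch of what using InstantID might look like, assuming it plugs into diffusers the same way ControlNet and IP-Adapter do. The pipeline class (`StableDiffusionXLInstantIDPipeline`), the `draw_kps` helper, the `load_ip_adapter_instantid` loader, the checkpoint paths, and the face model choice are all my guesses, not the official API; only the diffusers and insightface calls are real.

```python
# Hypothetical usage sketch: InstantID's code isn't public yet, so the pipeline
# class, module name, checkpoint paths, and loader method below are guesses
# based on how similar adapter projects (IP-Adapter, ControlNet) hook into diffusers.
import cv2
import numpy as np
import torch
from diffusers.models import ControlNetModel
from diffusers.utils import load_image
from insightface.app import FaceAnalysis

# Assumed: InstantID ships a diffusers-style pipeline plus a keypoint-drawing helper.
from pipeline_stable_diffusion_xl_instantid import StableDiffusionXLInstantIDPipeline, draw_kps

# Face encoder side: insightface extracts an identity embedding and landmarks
# from a single reference photo (model pack name is an assumption).
app = FaceAnalysis(name="antelopev2", providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

face_image = load_image("./reference_face.jpg")   # one face photo is all it needs
face_info = app.get(cv2.cvtColor(np.array(face_image), cv2.COLOR_RGB2BGR))[0]  # first detected face
face_emb = face_info["embedding"]                  # identity features
face_kps = draw_kps(face_image, face_info["kps"])  # spatial control image from keypoints

# Assumed checkpoint layout: a ControlNet-style identity network plus an adapter
# for the identity embedding, loaded on top of a stock SDXL base model.
controlnet = ControlNetModel.from_pretrained("./checkpoints/ControlNetModel", torch_dtype=torch.float16)
pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter_instantid("./checkpoints/ip-adapter.bin")  # assumed loader name

image = pipe(
    "a person as a medieval knight, oil painting",
    image_embeds=face_emb,              # assumed kwarg for the identity embedding
    image=face_kps,                     # assumed kwarg for the keypoint condition
    controlnet_conditioning_scale=0.8,
).images[0]
image.save("knight.png")
```

If it works the way the abstract suggests, the single reference photo only supplies conditioning signals (an identity embedding from the face encoder, presumably plus spatial cues), so a stock SDXL checkpoint stays frozen and no per-subject fine-tuning is needed.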
[Example images from the project page: Stylized Synthesis, Novel View Synthesis, Stacking Multiple References, and Multi-ID Synthesis in a Single Style]
This is pretty nice, but it appears to be limited to faces. I hope we get something that can preserve body and outfit in the future; that would be helpful for creating stories with consistent characters.
This is one of the first posts I've seen that genuinely blows my mind about the possibilities of AI for art that isn't just a mishmash.
Check out their project page if you haven’t already. There’s a lot more that I didn’t include here.