💄Gorgeous: Creating Narrative-Driven Makeup Ideas via Image Prompt 💡


Figure 1: Given any image prompt, whether a serene landscape or a dynamic scene, \( \textbf{Gorgeous} \) transforms your narrative into creative makeup ideas, perfect for moments when you're unsure how to convey meaning through makeup.

Abstract

Introducing \( \textbf{Gorgeous} \), a diffusion-based generative method that revolutionizes the makeup industry by empowering user creativity via image prompts. Unlike traditional makeup transfer methods that focus on replicating existing makeups, Gorgeous, for the first time, empowers users to integrate narrative elements into makeup ideation using image prompts. The result is a makeup concept that vividly reflects the user's expression through images, offering imaginative makeup ideas for physical makeup applications. To achieve this, Gorgeous establishes a foundational framework, ensuring the model learns "what makeup is" before integrating narrative elements. A pseudo-pairing strategy, built on a face parsing and content-style disentangling network, addresses the challenge of unpaired data, enabling makeup training on bare faces. Users can input images representing their ideas (e.g., fire), from which Gorgeous extracts context embeddings to guide our proposed makeup inpainting algorithm, conceptualizing creative, narrative-driven makeup ideas for targeted facial regions. Comprehensive experiments underscore the effectiveness of Gorgeous, paving the way for a new dimension in digital makeup artistry and application!
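The pseudo-pairing strategy above relies on a face parsing map to localize the facial regions where makeup is trained and applied. A minimal sketch of deriving a binary inpainting mask from such a parsing label map (the label indices below are assumptions for illustration, not the paper's actual parsing scheme):

```python
import numpy as np

# Hypothetical face-parsing label indices (assumed, not from the paper).
LIPS, LEFT_EYE, RIGHT_EYE = 7, 4, 5

def makeup_mask(parsing_map: np.ndarray,
                target_labels=(LIPS, LEFT_EYE, RIGHT_EYE)) -> np.ndarray:
    """Build a binary mask marking the facial regions to inpaint with makeup."""
    return np.isin(parsing_map, target_labels).astype(np.float32)

# Toy 4x4 parsing map: the "lips" label occupies the bottom-right corner.
parsing = np.zeros((4, 4), dtype=np.int64)
parsing[2:, 2:] = LIPS
mask = makeup_mask(parsing)  # 1.0 inside the target region, 0.0 elsewhere
```

In practice the parsing map would come from a pretrained face parser; the point here is only that region selection reduces to a label-membership test per pixel.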

Methodology



Overall architecture of our \( \textbf{Gorgeous} \). Given a set of image prompts, these images are first processed to extract key narrative elements, which are embedded into a placeholder token. This token guides \( MaIP \) to generate makeup ideas for the user. However, \( MaIP \) initially struggles to produce makeup-specific outputs, treating target regions as generic holes due to inpainting limitations. To address this, \( MaFor \) ensures the output is makeup-like by: (i) learning "what makeup is" as a pretrained model used by \( MaIP \) during inference, and (ii) transforming narrative tokens into makeup-specific representations during inference. This process refines \( MaIP \)'s inpainting to apply narrative-inspired makeup ideas to targeted facial regions, retaining both the essence of makeup and the user's narrative image inputs.
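Because \( MaIP \) only needs to modify the targeted facial regions, a standard inpainting composite keeps the rest of the face untouched. A minimal sketch of that blending step, assuming the bare face, the generated makeup image, and the region mask are already available as arrays (this is generic inpainting practice, not code from the paper):

```python
import numpy as np

def composite(face: np.ndarray, generated: np.ndarray,
              mask: np.ndarray) -> np.ndarray:
    """Blend generated makeup into the masked regions, keeping bare skin elsewhere.

    face, generated: (H, W, 3) float arrays; mask: (H, W) binary float array.
    """
    m = mask[..., None]  # broadcast the mask over the colour channels
    return m * generated + (1.0 - m) * face

# Toy example: a 2x2 black face, an all-white generated image,
# and a mask covering only the top-left pixel.
face = np.zeros((2, 2, 3))
gen = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
out = composite(face, gen, mask)  # only the masked pixel takes the new value
```

The same per-pixel convex combination is what lets the model preserve identity outside the target regions while the narrative token shapes the content inside them.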

More Results


Want to try Gorgeous on your own face?

Download the code and run it locally with your own images. It's open source and easy to get started!

Try It on GitHub

Citation

@misc{sii2024gorgeouscreatedesiredcharacter,
  title={Gorgeous: Create Your Desired Character Facial Makeup from Any Ideas},
  author={Jia Wei Sii and Chee Seng Chan},
  year={2024},
  eprint={2404.13944},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2404.13944},
}