Sketch outfits
11/21/2023

Fashion sketches are an essential part of product development. Researchers at the University of Florence, the University of Modena and Reggio Emilia, and the University of Pisa recently set out to explore the potential of AI models in fashion design. In a paper pre-published on arXiv, they introduced a new computer vision framework that could help fashion designers visualize their designs by showing them how they might look on the human body.

Most past studies exploring the use of AI in the fashion industry focused on computational tools that recommend garments similar to those selected by a user, or on models that show online customers how garments would look on their body (i.e., virtual try-on systems). This team of Italian researchers, on the other hand, set out to develop a framework that could support the work of designers, showing them how the garments they design might look in real life, so that they can find new inspiration, identify potential issues, and alter their designs if needed.

"Differently from previous works that mainly focused on the virtual try-on of garments, we propose the task of multimodal-conditioned fashion image editing, guiding the generation of human-centric fashion images by following multimodal prompts, such as text, human body poses, and garment sketches," Alberto Baldrati, Davide Morelli and their colleagues wrote in their paper. "We tackle this problem by proposing a new architecture based on latent diffusion models, an approach that has not been used before in the fashion domain."

Instead of using generative adversarial networks (GANs), artificial neural network architectures often used to generate new texts or images, the researchers decided to build their framework on latent diffusion models (LDMs). Because they are trained in a compressed, lower-dimensional latent space, LDMs can create high-quality synthetic images while remaining computationally efficient. While these promising models have been applied to many tasks that require the generation of artificial images or videos, they have rarely been used in the context of fashion image editing.

Most previous works in this area introduced GAN-based architectures, which generate lower-quality images than LDMs. In addition, most existing datasets for training AI models on fashion design tasks only include low-resolution images of clothing and lack the information necessary to create fashion images based on text prompts and sketches. To effectively train their model, Baldrati, Morelli and their colleagues thus first had to update these existing datasets or create new ones.

"Given the lack of existing datasets suitable for the task, we also extend two existing fashion datasets, namely Dress Code and VITON-HD, with multimodal annotations collected in a semi-automatic manner," the researchers explained in their paper. "Experimental results on these new datasets demonstrate the effectiveness of our proposal, both in terms of realism and coherence with the given multimodal inputs."

In initial evaluations, the model achieved very promising results, creating realistic images of garments on human bodies guided by human-drawn sketches and specific text prompts. The model's source code and the multimodal annotations the team added to the datasets will soon be released on GitHub.

In the future, the new model could be integrated into existing or new software tools for fashion designers. It could also inform the development of other AI architectures based on LDMs for real-world creative applications.

"This is one of the first successful attempts to mimic the designers' job in the creative process of fashion design and could be a starting point for the capillary adoption of diffusion models in creative industries, overseen by human input," Baldrati, Morelli and their colleagues conclude in their paper.
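To illustrate the latent diffusion idea mentioned above, here is a toy Python sketch of the core loop: rather than denoising full-resolution pixels, the model iteratively refines a small latent tensor under a conditioning signal and only decodes to image space at the end. Everything here is illustrative (the shapes, the `fake_denoiser` and `fake_decoder` stand-ins, and the dummy conditioning vector); it is not taken from the authors' unreleased code.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_SHAPE = (4, 8, 8)   # compressed latent, far smaller than the output image
NUM_STEPS = 50             # number of iterative denoising steps

def fake_denoiser(latent, condition):
    """Stand-in for the trained denoising network: nudges the latent toward
    the conditioning vector (text/pose/sketch embedding in a real LDM)."""
    target = condition.reshape(LATENT_SHAPE)
    return latent + 0.1 * (target - latent)

def fake_decoder(latent):
    """Stand-in for the decoder: upsamples the latent to 'pixel' space."""
    return np.repeat(np.repeat(latent, 8, axis=1), 8, axis=2)

def generate(condition):
    latent = rng.standard_normal(LATENT_SHAPE)  # start from pure noise
    for _ in range(NUM_STEPS):
        latent = fake_denoiser(latent, condition)
    return fake_decoder(latent)                 # decode only once, at the end

condition = np.ones(np.prod(LATENT_SHAPE))      # dummy multimodal embedding
image = generate(condition)
print(image.shape)  # (4, 64, 64)
```

The point of the sketch is the efficiency argument from the article: all of the iterative work happens on the small latent tensor, and the expensive full-resolution representation is produced only once, by the final decode.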