RedCraft | 红潮 CADS | UPdated-May11 | Commercial & Advertising Design System - RED-UNO(In-Context)
by ModelsLab
REDAIGC fine-tuned (FT) model used to match UNO In-Context Generation
(with improved quality compared to FLUX.1 dev)
---
Diffusers script:
https://github.com/bytedance/UNO
DiT-LoRA weights:
bytedance-research/UNO · Hugging Face
ComfyUI nodes:
HM-RunningHub/ComfyUI_RH_UNO — a UNO ComfyUI plugin
---
We propose a highly consistent data synthesis pipeline to tackle this challenge. The pipeline harnesses the intrinsic in-context generation capabilities of diffusion transformers to produce high-consistency multi-subject paired data. Additionally, we introduce UNO, a multi-image-conditioned subject-to-image model iteratively trained from a text-to-image model, built on progressive cross-modal alignment and universal rotary position embedding. Extensive experiments show that our method achieves high consistency while preserving controllability in both single-subject and multi-subject driven generation.
Model ID: redcraftcadsupdatedmay11commercialadvertisingdesignsystem-redunoincontext
Each image generation costs $0.0047.
On the premium plan, image generation costs $0.00, i.e. it is free.
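The per-call pricing above applies to API requests against this model. Below is a minimal sketch of assembling such a request body, assuming a ModelsLab-style JSON API; the field names ("key", "prompt", "width", and so on) are assumptions and should be checked against the provider's API documentation. Only the model ID slug comes from this page.

```python
# Hypothetical request-body builder for this community model.
# Field names are assumptions; verify against the provider's API docs.

MODEL_ID = "redcraftcadsupdatedmay11commercialadvertisingdesignsystem-redunoincontext"

def build_text2img_payload(api_key: str, prompt: str,
                           width: int = 1024, height: int = 1024,
                           samples: int = 1) -> dict:
    """Return a JSON-serializable body for a text-to-image request."""
    return {
        "key": api_key,        # your API key (assumed field name)
        "model_id": MODEL_ID,  # slug identifying this model, from this page
        "prompt": prompt,
        "width": width,
        "height": height,
        "samples": samples,    # number of images per call (assumed field name)
    }

payload = build_text2img_payload(
    "YOUR_API_KEY",
    "a commercial product poster, studio lighting",
)
```

The resulting dictionary would then be POSTed as JSON to the provider's text-to-image endpoint along with standard headers.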