5.11 update: RED-Edit v1.1 (based on the ICEdit normal LoRA), optimized for multi-instruction editing
5.11 update: the "Trainning data" workflow in the download list on the right has been updated
---
ICEdit multimodal controller (in-context-guided image editing)
In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer
Built on Flux.tools-Fill; needs as little as 6 GB of VRAM
Recent multimodal controllers have grown increasingly unfriendly to consumer users:
they routinely demand 20-30+ GB of VRAM, putting them out of reach.
ICEdit makes us believe in the light again.
---
RED-Edit merges the ICEdit trained weights into RED-Fill (NSFW).
Inference takes as few as 8 steps.
The workflow and model files are in the download list.
The workflows are packaged in the "Trainning data" archive.
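RED-Edit above is built by merging LoRA weights into a base model. As a minimal numeric sketch of what such a merge does (toy matrix sizes, hypothetical helper names; this is not the actual script used to build RED-Edit), a rank-r LoRA update B·A is simply folded into the base weight:

```python
# Toy sketch of folding a LoRA update into a base weight matrix:
# W_merged = W + scale * (B @ A). Illustrative only.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def merge_lora(W, A, B, scale=1.0):
    """Fold the low-rank update scale * (B @ A) into the base weight W."""
    delta = matmul(B, A)  # (out, rank) @ (rank, in) -> (out, in)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 identity base weight, rank-1 LoRA, half-strength merge.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # (out=2, rank=1)
A = [[0.5, 0.5]]     # (rank=1, in=2)
print(merge_lora(W, A, B, scale=0.5))  # → [[1.25, 0.25], [0.5, 1.5]]
```

After the merge, inference runs on a single set of weights, with no extra LoRA loading cost at runtime.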
---
Thanks to @river-zhang and team members at Zhejiang University & Harvard University
---
Compared with commercial models such as Gemini and GPT-4o, our method is comparable to, and in some cases superior to, these models in character ID preservation and instruction following. Unlike them, it is open-source, with lower cost, faster speed (about 9 seconds per image), and strong performance.
---
Advanced usage 1: stack Flux.fill LoRAs
for more stable edit results:
[Flux Fill] Virtual Try Off
@xiaozhijason / [Flux Fill] Virtual Try Off - alpha
Object Removal Flux Fill v2
@xiaozhijason / Object Removal Flux Fill v2 - v2.0
---
Advanced usage 2: stack a Flux IC-LoRA
Virtual try-on In-Context model:
xiaozaa/catvton-flux-lora-alpha at main
Thanks to T8x for suggesting this approach; 十字鱼 provides a bundled package for download.
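Both advanced setups amount to applying several LoRA deltas to the same base weights at once, each with its own strength. A toy sketch of that weighted stacking (hypothetical names and toy sizes; not the actual ComfyUI node logic):

```python
# Toy sketch of stacking several LoRAs on one base weight:
# W' = W + sum_i alpha_i * (B_i @ A_i). Illustrative only.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def stack_loras(W, adapters):
    """Apply each (A, B, alpha) LoRA update on top of base weight W."""
    out = [row[:] for row in W]
    for A, B, alpha in adapters:
        delta = matmul(B, A)
        for i in range(len(out)):
            for j in range(len(out[0])):
                out[i][j] += alpha * delta[i][j]
    return out

W = [[1.0, 0.0], [0.0, 1.0]]
edit_lora  = ([[1.0, 0.0]], [[0.0], [1.0]])   # stands in for the edit LoRA
fill_lora  = ([[0.0, 1.0]], [[1.0], [0.0]])   # stands in for a stacked Fill LoRA
print(stack_loras(W, [(*edit_lora, 1.0), (*fill_lora, 0.5)]))
# → [[1.0, 0.5], [1.0, 1.0]]
```

Lowering an adapter's alpha weakens its contribution, which is why tuning per-LoRA strength matters when combining editing and try-on adapters.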
---
@article{zhang2025ICEdit,
title={In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer},
author={Zhang, Zechuan and Xie, Ji and Lu, Yu and Yang, Zongxin and Yang, Yi},
journal={arXiv},
year={2025},
url={https://arxiv.org/abs/2504.20690},
}