
Mx5 GGUF 7GB V1

by ModelsLab

This is a quantized version of my Flux model, made to run on lower-end graphics cards.

Thanks to https://civitai.com/user/chrisgoringe243 for quantizing this; the quality is really good for such a small model.

Larger GGUF versions, suited to mid-range graphics cards, are available here: https://huggingface.co/ChrisGoringe/MixedQuantFlux/tree/main

Open Source Model · Unlimited Usage · LLMs.txt
API Playground · API Documentation
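As a minimal sketch of what a text-to-image call might look like: the endpoint URL and every field name below are hypothetical placeholders, not the real schema — consult the API Documentation linked above for the actual request format. The model id defaults to the slug shown on this page, which is an assumption about how the API identifies this model.

```python
import json

# Placeholder endpoint -- NOT a real URL; see the API Documentation.
API_URL = "https://example.invalid/api/v1/generate"

def build_request(prompt: str, model_id: str = "mx5gguf7gbv1") -> str:
    """Assemble a JSON request body (assumed schema).

    model_id defaults to the page slug; field names are illustrative only.
    """
    payload = {
        "model_id": model_id,
        "prompt": prompt,
        "width": 1024,   # assumed default resolution
        "height": 1024,
    }
    return json.dumps(payload)

# Build (but do not send) a sample request body.
print(build_request("a lighthouse at dusk"))
```

Sending the body (for example with `requests.post(API_URL, data=body)`) is left out, since the real endpoint and any authentication headers are documented only in the API Documentation.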


Each image generation costs $0.0047.
On the premium plan, image generation is free ($0.00).
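The pricing above works out as simple per-image arithmetic; a small sketch using the listed rates:

```python
STANDARD_RATE = 0.0047  # USD per generated image, as listed above
PREMIUM_RATE = 0.0      # premium plan: image generation is free

def generation_cost(images: int, premium: bool = False) -> float:
    """Total cost in USD for a batch of image generations."""
    rate = PREMIUM_RATE if premium else STANDARD_RATE
    return images * rate

# e.g. 1000 images on the standard plan cost about $4.70
print(generation_cost(1000))
print(generation_cost(1000, premium=True))
```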

