Image Edit just about anything – locally and free! #
Black Forest Labs has released Flux Kontext Dev under their non-commercial license. While that license isn’t great if you want to use this commercially, the Pro version, which can be accessed via a paid API, is available for commercial use. This gives people (like you and me) the opportunity to play with this technology, learn how to get the best results, and find out where its limitations are.
What is Kontext? #
Flux Kontext Dev is an open-weight image editing model designed to let you alter, combine, and otherwise manipulate images while preserving the parts you’d like to keep.
Here’s an example:
Original Image:
Prompt: Remove all x-wing fighters, remove all lasers, replace all guns with baseball bats, dress the two men and woman in baseball uniforms, add a bunch of birds to the outer space region, preserve the remainder of the original painting.
Where can I try out Flux Dev? #
If you don’t want to take the time, or don’t have the VRAM (12-24GB required) to run Flux Kontext locally, you have a few options.
- Flux1.ai – (Easy Difficulty) has a web interface you can try it out at, but you’re going to need to buy some credits.
- RunPod – (Medium Difficulty) A service where you can rent an online server with a GPU using Jupyter Notebooks. Note: These are actually dirt cheap to run – video instructions below
- Hook into the Flux Kontext Pro API – (Medium Difficulty) for the best variation of the model that’s available commercially (a rough API sketch follows below).
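If you go the API route, here’s a minimal sketch of what a call might look like in Python. Treat the endpoint path, the `x-key` header, and the payload/response fields as assumptions based on Black Forest Labs’ published API docs – verify everything against their current documentation before relying on it.

```python
# Hypothetical sketch of calling the Flux Kontext Pro API.
# Endpoint path, header name, and payload/response fields are assumptions --
# check Black Forest Labs' current API documentation before using this.
import base64
import time
import requests

API_KEY = "your-bfl-api-key"      # assumed auth scheme: an x-key header
BASE_URL = "https://api.bfl.ml"   # assumed base URL; check the docs for the current host

# Encode the image you want to edit as base64
with open("original.png", "rb") as f:
    input_image = base64.b64encode(f.read()).decode("utf-8")

# Submit the edit request
resp = requests.post(
    f"{BASE_URL}/v1/flux-kontext-pro",   # assumed endpoint name
    headers={"x-key": API_KEY},
    json={
        "prompt": "Replace all guns with baseball bats, preserve the rest of the painting",
        "input_image": input_image,
    },
)
resp.raise_for_status()
task_id = resp.json()["id"]

# The API is asynchronous, so poll until the result is ready
while True:
    result = requests.get(
        f"{BASE_URL}/v1/get_result",
        headers={"x-key": API_KEY},
        params={"id": task_id},
    ).json()
    if result.get("status") == "Ready":
        print("Edited image URL:", result["result"]["sample"])
        break
    time.sleep(2)
```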
Another example:
Original Image:
Prompt: Have the woman hold a hamburger in her left hand and hold a takeout drink in her right hand, preserve the remainder of this painting.
How do I run this locally? #
The best way to run Kontext locally is using Comfy UI. You can find a full installation guide here or just my quick and dirty version below:
How to Install ComfyUI #
For Windows
- Install 7-Zip: Use it to extract the ComfyUI zip file.
- Download ComfyUI: Get the standalone version from the provided link.
- Extract the File: Use 7-Zip to extract the downloaded `.7z` file.
- Download a Checkpoint Model: Place it in the `ComfyUI_windows_portable\models\checkpoints` folder.
- Start ComfyUI:
  - If you have an NVIDIA GPU, run `run_nvidia_gpu.bat`.
  - If you don’t, run `run_cpu.bat` (this will be slower).
For Mac (M1/M2)
- Install Homebrew: Use the terminal to install Homebrew.
- Install Required Packages: Run the command `brew install cmake protobuf rust python@3.10 git wget`.
- Clone ComfyUI: Use `git clone https://github.com/comfyanonymous/ComfyUI`.
- Create a Virtual Environment: Run `python3 -m venv venv`.
- Install PyTorch: Use `./venv/bin/pip install torch torchvision torchaudio` (you can verify the install with the quick check after this list).
- Install Required Packages: Run `./venv/bin/pip install -r requirements.txt`.
- Download a Stable Diffusion Model: Place it in the `models/checkpoints` folder.
- Start ComfyUI: Run `./venv/bin/python main.py`.
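On Apple Silicon, ComfyUI runs on PyTorch’s MPS backend. If you want to confirm that the PyTorch you just installed can actually see the GPU before launching, a quick check from the same virtual environment:

```python
# Quick check that PyTorch can use the Apple Silicon GPU via the MPS backend.
# Run with: ./venv/bin/python this_script.py
import torch

print("PyTorch version:", torch.__version__)
print("MPS available:", torch.backends.mps.is_available())
```

If MPS shows as unavailable, ComfyUI will still run, but it will fall back to the CPU and be much slower.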
Updating ComfyUI
- Windows: Run `update_comfyui.bat` in the ComfyUI folder.
- Mac: Use `git pull` in the ComfyUI directory.
How to Install Kontext on Comfy UI #
Once you have Comfy UI set up on your computer, it’s time to install Kontext. The good news is that it’s not too rough: Comfy UI has a guide that will get you up and running in no time, the Comfy UI How-To Setup Guide for Kontext. Or just follow my quick and dirty guide below:
Installing and Using FLUX.1 Kontext Dev in ComfyUI: A Quick Guide
Here’s a breakdown to get you started with FLUX.1 Kontext Dev in ComfyUI. It’s not so much an installation as it is downloading the necessary models and loading them within ComfyUI.
1. Model Download:
You’ll need these model files. Make sure you have a stable internet connection!
- Diffusion Model: `flux1-dev-kontext_fp8_scaled.safetensors`
- VAE: `ae.safetensors`
- Text Encoder: `clip_l.safetensors` and either `t5xxl_fp16.safetensors` or `t5xxl_fp8_e4m3fn_scaled.safetensors`
2. Model Placement:
Place the downloaded files in the correct directories within your ComfyUI installation:
ComfyUI/
└── models/
    ├── diffusion_models/
    │   └── flux1-dev-kontext_fp8_scaled.safetensors
    ├── vae/
    │   └── ae.safetensors
    └── text_encoders/
        ├── clip_l.safetensors
        └── t5xxl_fp16.safetensors (or t5xxl_fp8_e4m3fn_scaled.safetensors)
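If you want a quick way to confirm everything landed in the right place before launching, a short script like this can check for the files. The layout matches the tree above; `COMFY_ROOT` is just a placeholder for wherever you extracted or cloned ComfyUI.

```python
# Quick sanity check that the Kontext model files are where ComfyUI expects them.
# COMFY_ROOT is an assumption -- point it at your own ComfyUI folder.
from pathlib import Path

COMFY_ROOT = Path("ComfyUI")

EXPECTED_FILES = [
    "models/diffusion_models/flux1-dev-kontext_fp8_scaled.safetensors",
    "models/vae/ae.safetensors",
    "models/text_encoders/clip_l.safetensors",
    # Only one of the two T5 encoders is required; the check lists both.
    "models/text_encoders/t5xxl_fp16.safetensors",
    "models/text_encoders/t5xxl_fp8_e4m3fn_scaled.safetensors",
]

for rel_path in EXPECTED_FILES:
    path = COMFY_ROOT / rel_path
    status = "OK" if path.exists() else "MISSING"
    print(f"{status:8} {rel_path}")
```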
3. Loading the Models in ComfyUI:
- Load the Diffusion Model: In your ComfyUI workflow, use the “Load Diffusion Model” node and point it to `flux1-dev-kontext_fp8_scaled.safetensors`.
- Load the VAE: Use the “Load VAE” node and point it to `ae.safetensors`.
- Load the Text Encoder: Use the “DualCLIPLoader” node and load both `clip_l.safetensors` and either `t5xxl_fp16.safetensors` or `t5xxl_fp8_e4m3fn_scaled.safetensors`.
4. Workflow Selection:
You can either use the pre-made workflows available in the documentation (basic or grouped) or build your own. Remember that the documentation recommends using the latest Development (Nightly) version of ComfyUI to ensure you have the latest features and fixes.
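As a side note, once a workflow runs in the browser you can also drive it from a script: ComfyUI exposes a small HTTP API on port 8188, and a workflow exported with “Save (API Format)” can be queued with a POST to `/prompt`. Here’s a minimal sketch, assuming a default local install and a hypothetical exported file named `kontext_workflow_api.json`:

```python
# Minimal sketch: queue a saved (API-format) ComfyUI workflow over the local HTTP API.
# Assumes ComfyUI is running on the default port 8188 and that you exported your
# workflow with "Save (API Format)" as kontext_workflow_api.json (hypothetical name).
import json
import requests

COMFYUI_URL = "http://127.0.0.1:8188"

with open("kontext_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Optionally tweak the prompt text before queueing. The node id "6" is only an
# example -- look up the actual id of your CLIP Text Encode node in the JSON.
# workflow["6"]["inputs"]["text"] = "Add a handlebar mustache to the cat"

resp = requests.post(f"{COMFYUI_URL}/prompt", json={"prompt": workflow})
resp.raise_for_status()
print("Queued prompt:", resp.json()["prompt_id"])
```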
Important Notes:
- Make sure you’re using the correct file extensions.
- Double-check the file paths in your ComfyUI nodes.
- If you encounter issues, ensure you are using a recent version of ComfyUI (Development/Nightly).
That should get you going. Now go forth and edit some images! Just don’t blame me if your cat suddenly has a handlebar mustache.
Another example:
Original Images:
Prompt: The robed man and the man in a suit shake hands in the oval office
Prompt: The robed man holds a beer and the man in the suit holds a water bong in a nightclub
Another example:
Original Image:
Prompt: Add a cyberpunk tech armored dinosaur to the room behind the people, retain the remainder of the painting.
That’s about it! Have fun!