This repository contains the results of various AI Colorization methods/models tested by me.
- Introduction
- My Recommendations
- Results
- My Recommendations Showcase
- Extra
- License
I tested 11 different GitHub repos/models/implementations/software for AI colorization. I tried to stick to FOSS sources, but 2 of the implementations are not: DeOldify's MyHeritage In Color uses a closed model, and the other is Adobe Photoshop, which is obviously not FOSS.
If you only want to see the results visit: results.md
note: the results I present are not definitive; you could perhaps get better results by tweaking each implementation's settings/prompts/parameters, as well as by using the manual methods where available.
I tested the following:
- https://github.com/piddnad/DDColor - Jump to the results
- https://github.com/pmh9960/iColoriT - Jump to the results
- https://github.com/nick8592/text-guided-image-colorization - Jump to the results
- https://github.com/richzhang/colorization - Jump to the results
- https://github.com/Wazzabeee/image-video-colorization - Jump to the results
- https://github.com/junyanz/interactive-deep-colorization - Jump to the results
- https://github.com/jantic/DeOldify - Jump to the results
- https://www.adobe.com/products/photoshop.html - Jump to the results
- https://github.com/KIMGEONUNG/BigColor - Jump to the results
- https://openmodeldb.info/models/1x-BS-Colorizer - Jump to the results
- https://openmodeldb.info/models/1x-SpongeColor-Lite - Jump to the results
Some people might come here just to find the best colorization method/model/implementation/software, so I'll provide my recommendations right at the beginning.
Scroll to the bottom or click here to go to 'My Recommendations' Showcase.
I do invite you to read the rest of the README.md, though, as it is quite interesting and informative, and you might find something you wanted that is not in 'My Recommendations'. Remember that 'My Recommendations' is subjective and that the testing/evaluation was done on a small dataset and with a basic understanding of the different implementations' scripts.
So here are my recommendations: (Jump to Results)
- MyHeritage In Color
- DeOldify.NET (stable)
- Adobe Photoshop
- DDColor (modelscope)
- Interactive Deep Colorization (manual)
DDColor offers 4 models (`modelscope`, `artistic`, `paper` & `paper_tiny`), of which I used 3.
I ran them using chaiNNer and I was quite pleased with the `modelscope` results.
Original | Modelscope | Artistic | Paper |
---|---|---|---|
![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() |
iColoriT | Repo | Pretrained iColoriT Checkpoints
iColoriT offers 3 models (`Base Model (ViT-B)`, `Small Model (ViT-S)` & `Tiny Model (ViT-Ti)`).
I ran iColoriT two different ways:
- For the first, I used all 3 models and had each auto-colour the image for me.
- For the second, I used the GUI and manually tried my best to colour the image in with the hints it provided. I only used the `small` model for this method.
The only issue I found with iColoriT, as it's set up, is that the image you save is not at the full resolution of the original, which is not ideal. You will have to edit the scripts to save the result at full resolution.
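One possible workaround (this is not part of iColoriT itself; the function and file names below are hypothetical) is to keep the low-resolution colorized output, upscale only its colour information, and transplant that chroma onto the full-resolution original via YCbCr with Pillow:

```python
from PIL import Image

def restore_full_resolution(original_path, colorized_path, out_path):
    """Recombine a low-res colorized result with the full-res original.

    Keeps the original's luminance (Y) and takes only the chroma (Cb, Cr)
    from an upscaled copy of the colorized output.
    """
    original = Image.open(original_path).convert("L")      # full-res luminance
    colorized = Image.open(colorized_path).convert("RGB")  # low-res colour
    upscaled = colorized.resize(original.size, Image.LANCZOS)
    _, cb, cr = upscaled.convert("YCbCr").split()
    merged = Image.merge("YCbCr", (original, cb, cr)).convert("RGB")
    merged.save(out_path)
    return merged
```

Since chroma survives upscaling much better than luminance detail does, this usually looks close to a native full-resolution colorization.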
Original | Base Model (ViT-B) | Small Model (ViT-S) | Tiny Model (ViT-Ti) |
---|---|---|---|
![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() |
Original | Small Model (ViT-S) |
---|---|
![]() | ![]() |
![]() | ![]() |
Text-Guided-Image-Colorization | Repo | Download Pre-trained Models
Text-Guided-Image-Colorization uses a `ControlNet Model`, an `Image Captioning Model`, a `base model` & a `Checkpoint`. Most are SDXL models.
I ran it with the default settings (1st method) as well as alternative settings (2nd method). I also used ChatGPT to create prompts for the images (I'm not too good at that), so you could perhaps get better results than me with better prompts.
Here were my settings:

1st method (default settings):
- Select ControlNet Model: sdxl_light_caption_output/checkpoint-30000/controlnet
- Select Image Captioning Model: blip-image-captioning-large
- positive prompt: ChatGPT-generated (see prompts below)
- negative prompt: low quality, bad quality, low contrast, black and white, bw, monochrome, grainy, blurry, historical, restored, desaturate
- seed: 123
- Steps: 8
- precision: fp16
- base model: stabilityai/stable-diffusion-xl-base-1.0
- Repository: ByteDance/SDXL-Lightning
- Checkpoint: sdxl_lightning_8step_unet.safetensors

2nd method (alt settings):
- Select ControlNet Model: sdxl_light_custom_caption_output/checkpoint-30000/controlnet
- Select Image Captioning Model: blip-image-captioning-base
- positive prompt: ChatGPT-generated (see prompts below)
- negative prompt: low quality, bad quality, low contrast, black and white, bw, monochrome, grainy, blurry, historical, restored, desaturate
- seed: 123
- Steps: 8
- precision: fp16
- base model: stabilityai/stable-diffusion-xl-base-1.0
- Repository: ByteDance/SDXL-Lightning
- Checkpoint: sdxl_lightning_8step_unet.safetensors
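Since the two methods share most of their settings, the actual difference between them is small. A quick sketch (the dict keys simply mirror the GUI labels above, prompts omitted) confirms that only the ControlNet path and the captioning model change:

```python
# Settings transcribed from the two runs above (prompts omitted).
method_1 = {
    "Select ControlNet Model": "sdxl_light_caption_output/checkpoint-30000/controlnet",
    "Select Image Captioning Model": "blip-image-captioning-large",
    "seed": 123,
    "Steps": 8,
    "precision": "fp16",
    "base model": "stabilityai/stable-diffusion-xl-base-1.0",
    "Repository": "ByteDance/SDXL-Lightning",
    "Checkpoint": "sdxl_lightning_8step_unet.safetensors",
}
method_2 = {
    **method_1,
    "Select ControlNet Model": "sdxl_light_custom_caption_output/checkpoint-30000/controlnet",
    "Select Image Captioning Model": "blip-image-captioning-base",
}

# Which settings actually differ between the two methods?
changed = sorted(k for k in method_1 if method_1[k] != method_2[k])
print(changed)  # ['Select ControlNet Model', 'Select Image Captioning Model']
```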
Here were my prompts:
A black-and-white vintage wedding portrait of a couple. The groom is wearing a black suit, white shirt, and a black tie, with a white flower boutonniere on his left lapel. He has short dark hair and glasses. The bride is dressed in an ornate white wedding gown with lace detailing and long sleeves. She wears a white floral crown over her dark hair styled in a voluminous updo, with a sheer white veil cascading behind her. She holds a bouquet of white and light-colored flowers with trailing stems. The background is plain and neutral.
A black-and-white photo of an older couple standing in front of a classic Volkswagen Beetle car. The man on the right is wearing glasses, a light-colored short-sleeved shirt, dark trousers, and dark shoes. He has a beard and is smiling. The woman on the left is wearing a light blouse, light-colored pants, and white slip-on shoes. She holds a small purse over her left arm and stands close to the man with her hand on his shoulder. The background features a suburban setting with brick houses, tiled roofs, and a driveway. The car has a light exterior and visible striped upholstery inside.
Original | 1st Method | 2nd Method |
---|---|---|
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
The following 3 implementations (Colorful Image Colorization, Image Video Colorization & Interactive Deep Colorization) all use the methods and models presented at ECCV 2016 and SIGGRAPH 2017.
Colorful Image Colorization | Repo
This implementation does the colorization automatically through a demo script and then presents/saves for you an ECCV 2016 and a SIGGRAPH 2017 colorized version of your input image.
Original | ECCV | SIGGRAPH |
---|---|---|
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
Image Video Colorization | Repo
This implementation is a bit more user-friendly than the previous one, as it offers a GUI. It also offers extra functionality: besides images, you can colorize input videos and YouTube links/videos.
The image colorization results are the same as the previous implementation's. I did not test videos.
Original | ECCV | SIGGRAPH |
---|---|---|
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
Interactive Deep Colorization | Repo
This implementation also provides a GUI. It initially does an automatic colorization of your input image, but you can then also colorize it manually. What is nice about manual colorization is that it provides colour hints and a colour gamut, both of which are this implementation's estimate of what the colour should be at the point you selected on the input image.
This implementation uses the model from SIGGRAPH 2017.
Original | Auto | Manual |
---|---|---|
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
DeOldify | Repo | Pretrained Weights
This implementation can be used from several places, each producing different results.
I have tested the following 5 methods:
- Stable Diffusion Web UI Plugin - Jump to the results
- DeOldify Image Colorization on DeepAI - Jump to the results
- MyHeritage In Color - Jump to the results
- Google Colab - Stable | Google Colab - Artistic - Jump to the results
- DeOldify.NET - Jump to the results
Here are the descriptions of the 5 methods as provided by the repo:
- Stable Diffusion Web UI Plugin: Stable Diffusion Web UI Plugin- Photos and video, cross-platform (NEW!).
- DeOldify Image Colorization on DeepAI: Quick Start: The easiest way to colorize images using open source DeOldify (for free!).
- MyHeritage In Color: The most advanced version of DeOldify image colorization is available here, exclusively. Try a few images for free!
- Google Colab - Stable & Artistic: no real description provided.
- DeOldify.NET: ColorfulSoft Windows GUI- Photos/Windows only (No GPU required!).
Stable Diffusion Web UI Plugin: | Link
For this method there are 4 results, as I used all the possible combinations of this plugin's settings.
The DeOldify Stable Diffusion Web UI Plugin Settings were as follows:
1st combination: "render_factor=42, artistic=False, pre_decolorization=False"
2nd combination: "render_factor=42, artistic=False, pre_decolorization=True"
3rd combination: "render_factor=42, artistic=True, pre_decolorization=False"
4th combination: "render_factor=42, artistic=True, pre_decolorization=True"
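The four combinations above are just the Cartesian product of the two boolean flags at a fixed `render_factor`; as a sketch:

```python
from itertools import product

RENDER_FACTOR = 42

# Enumerate every combination of the two boolean plugin flags,
# in the same order as the four combinations listed above.
combinations = [
    {"render_factor": RENDER_FACTOR, "artistic": artistic, "pre_decolorization": pre}
    for artistic, pre in product([False, True], repeat=2)
]
for i, combo in enumerate(combinations, start=1):
    print(f"{i}: {combo}")
```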
Original | 1st combination | 2nd combination | 3rd combination | 4th combination |
---|---|---|---|---|
![]() | ![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() | ![]() |
DeOldify Image Colorization on DeepAI: | Link
Unfortunately this doesn't give your image back at full resolution.
Original | Colorized Result |
---|---|
![]() | ![]() |
![]() | ![]() |
MyHeritage In Color: | Link
Unfortunately this doesn't give your image back at full resolution & it adds a watermark.
Original | Colorized Result |
---|---|
![]() | ![]() |
![]() | ![]() |
Easy to run in your browser: you just have to provide a link (`source_url`) to your image before initiating the runtime. Additionally, what is nice is that this colab produces multiple outputs generated with different `render_factor` values (10 to 38, incrementing by 2).
I am only providing the result generated with a `render_factor` of 35. All the results can be seen here: Google Colab - Stable & Google Colab - Artistic.
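The sweep the colab performs can be reproduced with a simple range; note that the output filename pattern below is my own assumption for illustration, not the colab's actual naming:

```python
# render_factor values swept by the colab: 10 to 38 inclusive, step 2.
render_factors = list(range(10, 40, 2))
print(render_factors)  # 15 values: 10, 12, ..., 38

# Hypothetical output naming for each sweep step.
outputs = [f"colorized_rf{rf:02d}.png" for rf in render_factors]
```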
Original | Stable 35 | Artistic 35 |
---|---|---|
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
DeOldify.NET: | Link
You run scripts to create .exe
files which each uses different models. The .exe
then gives you a nice GUI to select input image, 'DeOldify' and save output image.
Here I only provide the results of the .exe
using Artistic colorizer with float32 weights
and Stable colorizer with float32 weights
as the results from the different models are quite similar. All the results can be seen here: Stable & Artistic.
Original | Stable float32 | Artistic float32 |
---|---|---|
![]() | ![]() | ![]() |
![]() | ![]() | ![]() |
Adobe Photoshop | Link
Using the `Colorize` `Neural Filter` in Adobe Photoshop you can get quite good results. The results I present are from the default automatic colorization, but you can adjust and edit the results manually to your liking.
Original | Adobe Photoshop Neural Filters - Colorize |
---|---|
![]() | ![]() |
![]() | ![]() |
BigColor | Repo | Download Pre-trained Models
BigColor is a bit of a step up from the other implementations, as it uses the model presented at ECCV 2022.
It also provides 4 scripts with the following descriptions:
- `infer.bigcolor.e011.sh` : ImageNet1K Validation : Use this to get the same inference results as used in the paper.
- `colorize.real.sh` : Real Gray Colorization : Use this to colorize a real grayscale image with arbitrary resolution.
- `colorize.multi_c.sh` : Multi-modal Solutions : Use this to test multiple solutions from an input (using class vector c).
- `colorize.multi_z.sh` : Multi-modal Solutions : Use this to test multiple solutions from an input (using random vector z).
BigColor also uses WordNet class classification to generate the output image, which can be modified to your liking. `infer.bigcolor.e011.sh` uses a hard-coded class, `colorize.real.sh` actually determines the content of the image and then uses appropriate classes, `colorize.multi_c.sh` uses a set of random hard-coded classes, and `colorize.multi_z.sh` uses a single hard-coded class.
^ These scripts are set up to use datasets already provided by the repo, but I modified the implementation to use my own 2 B&W images. If you can grasp how these scripts work, you could perhaps modify the logic and get better results than I did.
You can check the file names of the BigColor results in this repo to see the parameters/arguments and classes used to generate them.
Original | infer.bigcolor.e011.sh |
---|---|
![]() | ![]() |
![]() | ![]() |
Original | colorize.real.sh |
---|---|
![]() | ![]() |
![]() | ![]() |
Original | colorize.multi_c.sh | colorize.multi_c.sh | colorize.multi_c.sh | colorize.multi_c.sh | colorize.multi_c.sh |
---|---|---|---|---|---|
![]() | ![]() | ![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() | ![]() | ![]() |
This script produced 20 results; I am only showing 4 of them here. All the results can be seen here: Image 1 & Image 2
Original | colorize.multi_z.sh | colorize.multi_z.sh | colorize.multi_z.sh | colorize.multi_z.sh |
---|---|---|---|---|
![]() | ![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() | ![]() |
These models are very old and don't produce very good results, but I thought I would include them for completeness.
I ran them using chaiNNer.
BS_Colorizer/Vapourizer | Model
Model description:
B/W | 100% Desaturated images. It mostly results in Blue and Yellow images with slight hints of Green, Orange and Magenta. You are free to use this as a pretrain to achieve better results.
Original | BS_Colorizer/Vapourizer |
---|---|
![]() | ![]() |
![]() | ![]() |
SpongeColor Lite | Model
Model description:
The first attempt at ESRGAN colorization that produces more than 2 colors. Doesn't work that great but it was a neat experiment.
Original | SpongeColor Lite |
---|---|
![]() | ![]() |
![]() | ![]() |
Original | 1st: MyHeritage In Color | 2nd: DeOldify.NET (stable) | 3rd: Adobe Photoshop | 4th: DDColor (modelscope) | 5th: Interactive Deep Colorization (manual) |
---|---|---|---|---|---|
![]() | ![]() | ![]() | ![]() | ![]() | ![]() |
![]() | ![]() | ![]() | ![]() | ![]() | ![]() |
Here is another nice Colorization Showcase
Creative Commons Attribution Share Alike 4.0 International (CC-BY-SA-4.0)
AI Image Colorization Results © 2025 by Courage (Courage-1984) is licensed under CC-BY-SA-4.0