AI Image Colorization Results

This repository contains the results of various AI Colorization methods/models tested by me.

Table of Contents


Introduction

I tested 11 different implementations/models of AI Colorization. I tried to stick to FOSS sources, but 2 of the implementations are not: DeOldify's MyHeritage In Color uses a closed model, and Adobe Photoshop, which is obviously not FOSS.

If you only want to see the results, visit: results.md

Note: the results I present are not definitive; you could perhaps get better results by tweaking each implementation's settings/prompts/parameters, as well as by using the manual methods where available.

I tested the following:


My Recommendations

Some people might come here just to find the best colorization method/model, so I'll provide my recommendations here at the beginning.

Scroll to the bottom or click here to go to 'My Recommendations' Showcase.

I do invite you, though, to read the rest of the README.md, as it is quite interesting and informative, and you might find something you wanted that is not in 'My Recommendations'. Remember that 'My Recommendations' is subjective: the testing/evaluation was done on a small dataset and with a basic understanding of the different implementations' scripts.

So here are my recommendations: (Jump to Results)

  1. MyHeritage In Color
  2. DeOldify.NET (stable)
  3. Adobe Photoshop
  4. DDColor (modelscope)
  5. Interactive Deep Colorization (manual)

Results

DDColor | Repo | Model Zoo

DDColor offers 4 models (modelscope, artistic, paper & paper_tiny), of which I used 3.

I ran them using chaiNNer and I was quite pleased with the Modelscope results.

Original Modelscope Artistic Paper
Original Modelscope Artistic Paper
Original Modelscope Artistic Paper

iColoriT | Repo

iColoriT offers 3 models (Base Model (ViT-B), Small Model (ViT-S) & Tiny Model (ViT-Ti)).

I ran iColoriT two different ways:

  1. For the first, I used all 3 models and had each auto-colour the image for me.
  2. For the second, I used the GUI and manually tried my best to colour the image in with the hints it provided. I only used the small model for this method.

The only issue I found with iColoriT is that, as it is set up, the saved result is not at the full resolution of the original image, which is not ideal. You will have to edit the scripts to get the resulting image at full resolution.
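If editing the scripts is not an option, one generic workaround (a hedged Pillow sketch, not part of iColoriT; the function name and paths are hypothetical) is to keep the full-resolution luminance from the original and take only the upscaled chroma from the low-resolution colorized output:

```python
from PIL import Image

def restore_full_resolution(original_path, colorized_path, out_path):
    """Merge the sharp full-res luminance with upscaled chroma from the low-res colorized result."""
    original = Image.open(original_path).convert("RGB").convert("YCbCr")
    colorized = Image.open(colorized_path).convert("RGB").convert("YCbCr")
    # Upscale the colorized output's chroma to the original resolution.
    colorized = colorized.resize(original.size, Image.LANCZOS)
    y, _, _ = original.split()      # keep full-resolution luminance
    _, cb, cr = colorized.split()   # take only the colour channels
    Image.merge("YCbCr", (y, cb, cr)).convert("RGB").save(out_path)
```

Since the chroma channels carry far less detail than luminance, upscaling only them usually loses little visible quality.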

iColoriT Results:

First Method:

Original Base Model (ViT-B) Small Model (ViT-S) Tiny Model (ViT-Ti)
Base Model (ViT-B) Small Model (ViT-S) Tiny Model (ViT-Ti)
Base Model (ViT-B) Small Model (ViT-S) Tiny Model (ViT-Ti)

Second Method:

Original Small Model (ViT-S)
Small Model (ViT-S)
Small Model (ViT-S)

Text-Guided-Image-Colorization | Repo | Download Pre-trained Models

Text-Guided-Image-Colorization uses a ControlNet model, an image captioning model, a base model & a checkpoint. Most are SDXL models.

I ran it with the default settings (1st method) as well as alternative settings (2nd method). I also used ChatGPT to create prompts for the images (I'm not too good at that), so you could perhaps get better results than me with better prompts.

Here were my settings:

1st method:

Select ControlNet Model: sdxl_light_caption_output/checkpoint-30000/controlnet
Select Image Captioning Model: blip-image-captioning-large
positive prompt: chatgpt
negative: low quality, bad quality, low contrast, black and white, bw, monochrome, grainy, blurry, historical, restored, desaturate
seed: 123
Steps: 8
fp16
base model: stabilityai/stable-diffusion-xl-base-1.0
Repository: ByteDance/SDXL-Lightning
Checkpoint: sdxl_lightning_8step_unet.safetensors

2nd method:

Select ControlNet Model: sdxl_light_custom_caption_output/checkpoint-30000/controlnet
Select Image Captioning Model: blip-image-captioning-base
positive prompt: chatgpt
negative: low quality, bad quality, low contrast, black and white, bw, monochrome, grainy, blurry, historical, restored, desaturate
seed: 123
Steps: 8
fp16
base model: stabilityai/stable-diffusion-xl-base-1.0
Repository: ByteDance/SDXL-Lightning
Checkpoint: sdxl_lightning_8step_unet.safetensors

Here were my prompts:

1st image prompt:

A black-and-white vintage wedding portrait of a couple. The groom is wearing a black suit, white shirt, and a black tie, with a white flower boutonniere on his left lapel. He has short dark hair and glasses. The bride is dressed in an ornate white wedding gown with lace detailing and long sleeves. She wears a white floral crown over her dark hair styled in a voluminous updo, with a sheer white veil cascading behind her. She holds a bouquet of white and light-colored flowers with trailing stems. The background is plain and neutral.

2nd image prompt:

A black-and-white photo of an older couple standing in front of a classic Volkswagen Beetle car. The man on the right is wearing glasses, a light-colored short-sleeved shirt, dark trousers, and dark shoes. He has a beard and is smiling. The woman on the left is wearing a light blouse, light-colored pants, and white slip-on shoes. She holds a small purse over her left arm and stands close to the man with her hand on his shoulder. The background features a suburban setting with brick houses, tiled roofs, and a driveway. The car has a light exterior and visible striped upholstery inside.

Text-Guided-Image-Colorization Results:

Original 1st Method 2nd Method
Original 1st Method 2nd Method

The following 3 implementations (Colorful Image Colorization, Image Video Colorization & Interactive Deep Colorization) all use the methods and models presented at ECCV 2016 and SIGGRAPH 2017.


Colorful Image Colorization | Repo

This implementation does the colorization automatically through a demo script and then presents/saves an ECCV 2016 and a SIGGRAPH 2017 colorized version of your input image.

Original ECCV SIGGRAPH
Original ECCV SIGGRAPH

Image Video Colorization | Repo

This implementation is a bit more user-friendly than the previous one, as it offers a GUI. It also offers extra functionality: in addition to images, you can colorize input videos and YouTube links/videos.

The image colorization results are the same as the previous implementation's. I did not test videos.

Original ECCV SIGGRAPH
Original ECCV SIGGRAPH

Interactive Deep Colorization | Repo

This implementation also provides a GUI. It initially does an auto colorization of your input image, but you can then also colorize it manually. What is nice about manual colorization is that it provides colour hints and a colour gamut, both of which are the implementation's estimation of what the colour should be at the point you selected on the input image.

This implementation uses the model from SIGGRAPH 2017.

Original Auto Manual
Original Auto Manual

DeOldify | Repo

DeOldify can be used from a couple of places, each producing different results.

I have tested the following 5 methods:

  1. Stable Diffusion Web UI Plugin - Jump to the results
  2. DeOldify Image Colorization on DeepAI - Jump to the results
  3. MyHeritage In Color - Jump to the results
  4. Google Colab - Stable | Google Colab - Artistic - Jump to the results
  5. DeOldify.NET - Jump to the results

Here are the descriptions of the 5 methods as provided by the repo:

  1. Stable Diffusion Web UI Plugin: Stable Diffusion Web UI Plugin- Photos and video, cross-platform (NEW!).
  2. DeOldify Image Colorization on DeepAI: Quick Start: The easiest way to colorize images using open source DeOldify (for free!).
  3. MyHeritage In Color: The most advanced version of DeOldify image colorization is available here, exclusively. Try a few images for free!
  4. Google Colab - Stable & Artistic: no real description provided.
  5. DeOldify.NET: ColorfulSoft Windows GUI- Photos/Windows only (No GPU required!).

DeOldify Results:

Stable Diffusion Web UI Plugin: | Link

For this method there are 4 results, as I used all the possible combinations of the plugin's settings.

The DeOldify Stable Diffusion Web UI Plugin Settings were as follows:

1st combination: "render_factor=42, artistic=False, pre_decolorization=False"
2nd combination: "render_factor=42, artistic=False, pre_decolorization=True"
3rd combination: "render_factor=42, artistic=True, pre_decolorization=False"
4th combination: "render_factor=42, artistic=True, pre_decolorization=True"
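The four combinations are simply the cartesian product of the two boolean flags at a fixed render_factor. As a quick sketch (the settings dicts are illustrative, not the plugin's actual API):

```python
from itertools import product

render_factor = 42
# Enumerate every combination of the two boolean plugin flags.
combinations = [
    {"render_factor": render_factor, "artistic": artistic, "pre_decolorization": pre}
    for artistic, pre in product([False, True], repeat=2)
]
# Yields 4 settings dicts in the order used above:
# (False, False), (False, True), (True, False), (True, True)
```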
Original 1st combination 2nd combination 3rd combination 4th combination
1st combination 2nd combination 3rd combination 4th combination
1st combination 2nd combination 3rd combination 4th combination

DeOldify Image Colorization on DeepAI: | Link

Unfortunately this doesn't give your image back at full resolution.

Original Colorized Result
Colorized Result
Colorized Result

MyHeritage In Color: | Link

Unfortunately this doesn't give your image back at full resolution & has a watermark.

Original Colorized Result
Colorized Result
Colorized Result

Google Colab: | Link 1 | Link 2

Easy to run in your browser; you just have to provide a link (source_url) to your image before starting the Runtime. What is also nice is that this Colab produces multiple outputs generated with different render_factor values (10 to 38, incrementing by 2).

I am only providing the result generated with a render_factor of 35. All the results can be seen here: Google Colab - Stable & Google Colab - Artistic.

Original Stable 35 Artistic 35
Original Stable 35 Artistic 35
Original Stable 35 Artistic 35

DeOldify.NET: | Link

You run scripts to create .exe files, each of which uses a different model. The .exe then gives you a nice GUI to select an input image, 'DeOldify' it, and save the output image.

Here I only provide the results of the .exe using the Artistic colorizer with float32 weights and the Stable colorizer with float32 weights, as the results from the different models are quite similar. All the results can be seen here: Stable & Artistic.

Original Stable float32 Artistic float32
Original Stable float32 Artistic float32
Original Stable float32 Artistic float32

Adobe Photoshop | Link

Using the Colorize Neural Filter in Adobe Photoshop you can get quite good results. The results I present are from the default automatic colorization, but you can adjust and edit the results manually to your liking.

Original Adobe Photoshop Neural Filters - Colorize
Original Adobe Photoshop_Neural Filters_Colorize
Original Adobe Photoshop_Neural Filters_Colorize

BigColor | Repo

BigColor is a bit of a step up from the other implementations, as it uses the models presented at ECCV 2022.

It also provides 4 scripts with the following descriptions:

infer.bigcolor.e011.sh : ImageNet1K Validation : Use this to get the same inference results as used in the paper.

colorize.real.sh : Real Gray Colorization : Use this to colorize a real grayscale image with arbitrary resolution.

colorize.multi_c.sh : Multi-modal Solutions : Use this to test the multiple solutions from an input (using class vector c).

colorize.multi_z.sh : Multi-modal Solutions : Use this to test the multiple solutions from an input (using random vector z).

BigColor also uses 'WordNet' class classification to generate the output image, which can be modified to your liking. infer.bigcolor.e011.sh uses a hard-coded class; colorize.real.sh actually determines the content of the image and then uses appropriate classes; colorize.multi_c.sh uses a set of random hard-coded classes; and colorize.multi_z.sh uses a single hard-coded class.

These scripts are set up to use the datasets already provided by the repo, but I modified the implementation to use my own 2 B&W images. If you can grasp how these scripts work, you could perhaps modify the logic and get better results than I did.

You can check the file names of the BigColor results in this repo to see the parameters/arguments and classes used to generate them.

BigColor Results:

infer.bigcolor.e011.sh:

Original infer.bigcolor.e011.sh
Original infer.bigcolor.e011.sh
Original infer.bigcolor.e011.sh

colorize.real.sh:

Original colorize.real.sh
Original colorize.real.sh
Original colorize.real.sh

colorize.multi_c.sh:

Original colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh
colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh
colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh colorize.multi_c.sh

colorize.multi_z.sh:

This script produced 20 results; I am only showing 4 here. All the results can be seen here: Image 1 & Image 2

Original colorize.multi_z.sh colorize.multi_z.sh colorize.multi_z.sh colorize.multi_z.sh
colorize.multi_z.sh colorize.multi_z.sh colorize.multi_z.sh colorize.multi_z.sh
colorize.multi_z.sh colorize.multi_z.sh colorize.multi_z.sh colorize.multi_z.sh

Other Open Models (old)

These models are very old and don't produce very good results, but I thought I would include them for completeness.

I ran them using chaiNNer.

BS_Colorizer/Vapourizer | Model

Model description:

B/W | 100% Desaturated images. It mostly results in Blue and Yellow images with slight hints of Green, Orange and Magenta. You are free to use this as a pretrain to achieve better results.
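Since this model expects fully desaturated input, here is a minimal Pillow sketch (a hypothetical helper, not part of the model's tooling) to prepare an image before feeding it through chaiNNer:

```python
from PIL import Image

def desaturate(in_path: str, out_path: str) -> None:
    """Fully desaturate an image so it matches the model's expected B/W input."""
    # Convert to single-channel greyscale, then back to RGB so the
    # colorization model still receives a 3-channel, zero-saturation image.
    Image.open(in_path).convert("L").convert("RGB").save(out_path)
```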

Original BS_Colorizer/Vapourizer
Original BS_Colorizer/Vapourizer
Original BS_Colorizer/Vapourizer

SpongeColor Lite | Model

Model description:

The first attempt at ESRGAN colorization that produces more than 2 colors. Doesn't work that great but it was a neat experiment.

Original SpongeColor Lite
Original SpongeColor Lite
Original SpongeColor Lite

Hope you found this useful!


My Recommendations Showcase

Original 1st: MyHeritage In Color 2nd: DeOldify.NET (stable) 3rd: Adobe Photoshop 4th: DDColor (modelscope) 5th: Interactive Deep Colorization (manual)

Extra

Here is another nice Colorization Showcase


License

Creative Commons Attribution Share Alike 4.0 International (CC-BY-SA-4.0)

AI Image Colorization Results © 2025 by Courage (Courage-1984) is licensed under CC-BY-SA-4.0

