Color Mismatches in Stereoscopic Video: Real-World Dataset and Deep Correction Method

Frame #1,200 from the video “VR180 Cameras with Daydream” by Google contains color mismatches.

Color-mismatch correction is the task of transferring colors from one view of a stereopair to the corresponding areas of the other view wherever the two views’ colors mismatch.

This repo contains two datasets and six color-transfer methods.
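
To make the task concrete, below is a minimal sketch of the simplest kind of approach: a global color transfer that matches per-channel statistics in the Lab color space (Reinhard-style mean/std matching). It is only an illustration under assumed inputs, not one of this repo's implementations; the function name is hypothetical.

import numpy as np
from skimage import color

def global_color_transfer(target, reference):
    """Match the per-channel mean and std of target to reference in Lab space.
    Both inputs are assumed to be float RGB images in [0, 1]."""
    target_lab = color.rgb2lab(target)
    reference_lab = color.rgb2lab(reference)
    # Shift and scale each Lab channel of the target toward the reference statistics
    t_mean, t_std = target_lab.mean(axis=(0, 1)), target_lab.std(axis=(0, 1))
    r_mean, r_std = reference_lab.mean(axis=(0, 1)), reference_lab.std(axis=(0, 1))
    matched_lab = (target_lab - t_mean) / (t_std + 1e-8) * r_std + r_mean
    return np.clip(color.lab2rgb(matched_lab), 0.0, 1.0)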

What’s New

  • 08.08.2024 The arXiv v3 version has been published
  • 15.05.2024 We have updated the comparison methodology and improved our color-transfer method
  • 15.06.2023 We were not allowed to attend the EUSIPCO 2023 conference because of our affiliation
  • 15.06.2023 The arXiv v2 version has been updated with FSIM and iCID results
  • 29.05.2023 Our work was accepted for the EUSIPCO 2023 conference
  • 12.03.2023 The arXiv v1 version has been published

Demo

Given an image pair or a video sequence, our code can generate color-transfer results. Refer to the Jupyter notebook for example usage.

Results of color transfer from the reference image to the target image on a stereopair from InStereo2K. The hue of the target image was shifted by the maximum magnitude (+0.5). The neural-network-based methods (Croci et al.’s and ours), which were trained on such distortions, successfully transferred the colors.
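
The hue distortion mentioned in the caption above can be reproduced roughly as follows (a sketch assuming a float RGB image in [0, 1]; the exact distortion pipeline used to build the artificial data may differ):

from skimage import color

def shift_hue(image, magnitude=0.5):
    # Shift the hue channel of an RGB image by `magnitude`, wrapping around 1
    hsv = color.rgb2hsv(image)
    hsv[..., 0] = (hsv[..., 0] + magnitude) % 1.0  # hue is stored in [0, 1]
    return color.hsv2rgb(hsv)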

Installation

Clone this repo and install dependencies:

git clone https://github.com/egorchistov/color-transfer.git
cd color-transfer
pip install -qr requirements.txt

Datasets

We created the following datasets to train and evaluate available models:

Training

Download the datasets and use these commands to start training:

python -m utils.cli fit --config configs/dcmcs3di.yaml
python -m utils.cli fit --config configs/dmsct.yaml
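
The fit and test subcommands with dot-notation flags (see --trainer.logger false under Evaluation below) suggest the CLI is built on PyTorch Lightning's LightningCLI; if so, individual trainer options can likely also be overridden inline, for example:

python -m utils.cli fit --config configs/dcmcs3di.yaml --trainer.max_epochs 100  # hypothetical override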

Refer to WandB for training history and weights of the trained models.

Evaluation

Download the specified weights and use these commands to start the evaluation:

python -m utils.cli test --config configs/dcmcs3di.yaml --ckpt_path color-transfer/y1mq1usg/checkpoints/epoch\=96-step\=10185.ckpt --trainer.logger false
python -m utils.cli test --config configs/dmsct.yaml --ckpt_path color-transfer/86n1v9bd/checkpoints/epoch\=72-step\=7665.ckpt --trainer.logger false
python -m utils.cli test --config configs/others.yaml --model.func_spec "methods.linear.color_transfer_between_images"  # and so on
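
The --model.func_spec option selects a classical method by its dotted import path. Such a spec is typically resolved to a callable roughly like this (a sketch, not necessarily the repo's exact implementation):

import importlib

def resolve_func_spec(spec):
    # Resolve e.g. "methods.linear.color_transfer_between_images" to the function it names
    module_path, func_name = spec.rsplit(".", 1)
    return getattr(importlib.import_module(module_path), func_name)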

Results

On the artificial dataset, our method ranked best according to all quality-assessment metrics. On the real-world dataset, however, all non-global methods, which assume more complex, non-trivial distortion models, performed worse than the global methods. This discrepancy is likely due to the domain shift between the distortion model used during method development and the real-world distortions.

Comparison of eight color-mismatch-correction methods on two datasets. The best result appears in bold.

Citation

If you find our work useful, please cite the following paper:

@article{chistov2024color,
  title={Color Mismatches in Stereoscopic Video: Real-World Dataset and Deep Correction Method},
  author={Chistov, Egor and Alutis, Nikita and Vatolin, Dmitriy},
  journal={arXiv preprint arXiv:2303.06657},
  year={2024}
}

See Also
