ContourFormer: Real-Time Contour-Based End-to-End Instance Segmentation Transformer


📄 This is the official implementation of the paper:
ContourFormer: Real-Time Contour-Based End-to-End Instance Segmentation Transformer

Weights

Download the pretrained model from Google Drive.
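
To verify a download before training, the checkpoint can be inspected with plain PyTorch. This is a minimal sketch: the file name matches the one used in the Demo command below, and the assumption that the checkpoint is a dict wrapping a state dict is illustrative, not documented behavior.

import torch

# Load on CPU so no GPU is needed just to inspect the file.
ckpt = torch.load("weight/contourformer_hgnetv2_b3_sbd.pth", map_location="cpu")

# A checkpoint is typically either a raw state dict or a wrapper dict
# such as {"model": ..., "optimizer": ...}; the top-level keys tell you which.
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])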

Quick start

Setup

conda create -n contourformer python=3.11.9
conda activate contourformer
pip install -r requirements.txt
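
After installation, a quick sanity check (a sketch, not part of the repository) confirms that PyTorch can see your GPUs before you launch the multi-GPU commands below:

import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())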

Data Preparation

COCO2017 Dataset
  1. Download COCO2017 from OpenDataLab or COCO.

  2. Modify the dataset paths in coco_poly_detection.yml:

    train_dataloader:
        img_folder: /data/COCO2017/train2017/
        ann_file: /data/COCO2017/annotations/instances_train2017.json
    val_dataloader:
        img_folder: /data/COCO2017/val2017/
        ann_file: /data/COCO2017/annotations/instances_val2017.json
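
Once the paths are set, the annotation file can be sanity-checked with pycocotools (install it separately if requirements.txt does not pull it in). A minimal sketch using the example path above; the same check applies to the SBD and KINS annotation files below.

from pycocotools.coco import COCO

# Example path from the config above; adjust to your layout.
ann_file = "/data/COCO2017/annotations/instances_train2017.json"

coco = COCO(ann_file)  # parses and indexes the JSON; fails loudly on a bad path
print("images:", len(coco.getImgIds()))
print("annotations:", len(coco.getAnnIds()))
print("categories:", len(coco.getCatIds()))
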
SBD Dataset
  1. Download the COCO-format SBD dataset from here.

  2. Modify the dataset paths in sbd_poly_detection.yml:

    train_dataloader:
        img_folder: /data/sbd/img/
        ann_file: /data/sbd/annotations/sbd_train_instance.json
    val_dataloader:
        img_folder: /data/sbd/img/
        ann_file: /data/sbd/annotations/sbd_trainval_instance.json
KINS dataset
  1. Download the KITTI dataset from the official website.

  2. Download the annotation files instances_train.json and instances_val.json from KINS.

  3. Organize the dataset into the following structure:

    ├── /path/to/kitti
    │   ├── testing
    │   │   ├── image_2
    │   │   ├── instances_val.json
    │   ├── training
    │   │   ├── image_2
    │   │   ├── instances_train.json
    
  4. Modify the dataset paths in kins_poly_detection.yml:

    train_dataloader:
        img_folder: /data/kins_dataset/training/image_2/
        ann_file: /data/kins_dataset/training/instances_train.json
    val_dataloader:
        img_folder: /data/kins_dataset/testing/image_2/
        ann_file: /data/kins_dataset/testing/instances_val.json
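
Since KINS reuses the KITTI images with its own annotation files, it is easy to misplace one of them. A small sketch (the root path is the example from the config above) that verifies the expected layout:

from pathlib import Path

# Example root from the config above; adjust to your layout.
root = Path("/data/kins_dataset")
expected = [
    root / "training" / "image_2",
    root / "training" / "instances_train.json",
    root / "testing" / "image_2",
    root / "testing" / "instances_val.json",
]
for path in expected:
    status = "ok" if path.exists() else "MISSING"
    print(f"{status:8} {path}")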

Demo

 python draw.py -c configs/contourformer/contourformer_hgnetv2_b3_sbd.yml -r weight/contourformer_hgnetv2_b3_sbd.pth -i your_image.jpg
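
To process a whole folder of images, the command above can be driven from Python. This sketch relies only on the flags shown (-c, -r, -i); the your_images/ folder is a placeholder, and no other draw.py options are assumed.

import subprocess
from pathlib import Path

config = "configs/contourformer/contourformer_hgnetv2_b3_sbd.yml"
checkpoint = "weight/contourformer_hgnetv2_b3_sbd.pth"

# Invoke draw.py once per image, exactly as in the single-image command above.
for image in sorted(Path("your_images/").glob("*.jpg")):
    subprocess.run(
        ["python", "draw.py", "-c", config, "-r", checkpoint, "-i", str(image)],
        check=True,
    )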

Usage

COCO2017
  1. Training
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b2_coco.yml --seed=0
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b3_coco.yml --seed=0
  2. Testing
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b2_coco.yml --test-only -r model.pth
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b3_coco.yml --test-only -r model.pth
SBD
  1. Training
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b2_sbd.yml --seed=0
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b3_sbd.yml --seed=0
  2. Testing
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b2_sbd.yml --test-only -r model.pth
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b3_sbd.yml --test-only -r model.pth
KINS
  1. Training
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --master_port=7777 --nproc_per_node=8 train.py -c configs/contourformer/contourformer_hgnetv2_b2_kins.yml --seed=0
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --master_port=7777 --nproc_per_node=8 train.py -c configs/contourformer/contourformer_hgnetv2_b3_kins.yml --seed=0
  2. Testing
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --master_port=7777 --nproc_per_node=8 train.py -c configs/contourformer/contourformer_hgnetv2_b2_kins.yml --test-only -r model.pth
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --master_port=7777 --nproc_per_node=8 train.py -c configs/contourformer/contourformer_hgnetv2_b3_kins.yml --test-only -r model.pth

Citation

If you use ContourFormer or its methods in your work, please cite the following BibTeX entry:

@misc{yao2025contourformerrealtimecontourbasedendtoendinstance,
      title={ContourFormer:Real-Time Contour-Based End-to-End Instance Segmentation Transformer}, 
      author={Weiwei Yao and Chen Li and Minjun Xiong and Wenbo Dong and Hao Chen and Xiong Xiao},
      year={2025},
      eprint={2501.17688},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.17688}, 
}

Acknowledgement

Our work is built upon D-FINE. Thanks to D-FINE and PolySnake for the inspiration.

✨ Feel free to contribute and reach out if you have any questions! ✨
