
Real-time instance segmentation with Detectron2. It uses SOLOv2, CenterMask, CondInst & Mask R-CNN on a custom Balloon dataset.


satya15july/instance_segmentation


Real Time Instance Segmentation using Detectron2 & AdelaiDet

Demo videos: out_cityscapes_tensormask.mp4, out_cityscapes_centermask.mp4, balloon_centermask.mp4

Overview:

The timeline of different instance segmentation architectures is shown below:

(Figure: timeline of instance segmentation architectures)

Here we solve instance segmentation on the Balloon and Cityscapes datasets using architectures that are fast and designed to run on edge devices, such as:

  • CenterMask
  • CondInst
  • SOLOv2
  • Mask R-CNN (not a fast architecture, but used as the benchmark for the others)

Dependency:

  • Detectron2: Install Detectron2 by following the official installation instructions.

  • AdelaiDet: This is written on top of Detectron2. Instance architectures such as CondInst, SOLOv2, BlendMask, etc. are part of it. Install AdelaiDet by following the
    instructions given on its website, then clone the repo into the root folder of this project.

    (Note: you may face some CUDA-related errors while installing this package which you need to fix. I faced this with my RTX 2080 Ti graphics card.)

    Please apply the patch AdelaiDet_CUDA_fix.patch present in this repo.

  • CenterMask2: I modified this implementation to make it work in AdelaiDet alongside the other architectures.

    Please apply the patch CenterMask2_modi.patch after you download CenterMask2.

Here are some of the modifications done to make the different architectures work for the instance segmentation task:

(Figure: detectron2 modifications)

Training:

With Balloon Dataset:

  • Download the balloon dataset and convert it into COCO format.
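The conversion step can be sketched as below, assuming the balloon dataset's standard VIA annotation file (via_region_data.json, as used in the official detectron2 balloon tutorial). The function names and key layout here are illustrative, not this repo's actual converter:

```python
# Sketch: turn the balloon dataset's VIA annotations into COCO-style records.
# Assumes the "via_region_data.json" layout from the detectron2 balloon
# tutorial; adjust the keys if your VIA export differs.
import json
import os

def via_polygon_to_coco(px, py):
    """Convert VIA polygon point lists into a COCO segmentation + XYWH bbox."""
    poly = [coord for xy in zip(px, py) for coord in xy]  # flatten to [x1,y1,x2,y2,...]
    bbox = [min(px), min(py), max(px) - min(px), max(py) - min(py)]
    return [poly], bbox

def load_balloon_dicts(img_dir):
    """Build a list of per-image records with COCO-style annotations."""
    with open(os.path.join(img_dir, "via_region_data.json")) as f:
        via = json.load(f)
    records = []
    for idx, v in enumerate(via.values()):
        record = {
            "file_name": os.path.join(img_dir, v["filename"]),
            "image_id": idx,
            "annotations": [],
        }
        # "regions" is a dict in older VIA exports; newer exports use a list.
        for region in v["regions"].values():
            shape = region["shape_attributes"]
            seg, bbox = via_polygon_to_coco(shape["all_points_x"],
                                            shape["all_points_y"])
            record["annotations"].append(
                {"bbox": bbox, "segmentation": seg, "category_id": 0}
            )
        records.append(record)
    return records
```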

For training, execute the command below:

python3.8 training_balloon.py --arch <architecture> --path <model_out> --epochs <num_epochs> --model <pretrained-weight> --resume <0/1>

--arch = The architectures currently supported are ['maskrcnn', 'centermask_mv2', 'centermask_v19_slimdw', 'solov2', 'condinst'].

--path = Provide the path where the model trained on the Balloon dataset will be saved.

--epochs = Provide the number of epochs.

--model = This option is used when you want to resume training. For example, suppose you initially trained your model for 10000 epochs and the model was saved in the 'savedModels' folder. To resume training from epoch 10000, you need to pass the model weights that were saved in the savedModels folder.

--resume = Use 0 or 1 to resume the training process. (Note: you must provide the previous model weights via the --model option.)

For example: python3.8 training_balloon.py --arch centermask_mv2 --path model_out --epochs 10000

To resume training: python3.8 training_balloon.py --arch centermask_mv2 --path model_out --model model_out/centermask_mv2/final_model.pth --epochs 20000 --resume 1
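The rule above — resuming requires the previous checkpoint — can be sketched as a small guard. This is purely illustrative; resolve_start_weights is a hypothetical helper, not a function in this repo:

```python
# Illustrative guard mirroring the --resume / --model contract described above.
def resolve_start_weights(resume, model_path):
    """Return the weights to initialize from, or raise if the flags conflict."""
    if resume:
        if not model_path:
            raise ValueError("--resume 1 requires --model <previous checkpoint>")
        return model_path   # continue from the saved checkpoint
    return None             # fresh training from the pretrained backbone
```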

With Cityscapes Dataset:

  • Register on the Cityscapes website. Approval might take some days.

  • Once you get approval, download the dataset to your local path.

  • export DETECTRON2_DATASETS=PATH.

    For example: export DETECTRON2_DATASETS=/media/satya/work/project/segmentation/datasets. Add this to your ~/.bashrc.

    The dataset structure should be: datasets/cityscapes/{leftImg8bit,gtFine}

  • Then run the Cityscapes preparation script as described in https://detectron2.readthedocs.io/en/latest/tutorials/builtin_datasets.html
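Before training, a quick sanity check on the layout above can save a failed run. The snippet below is a small sketch (not part of this repo) that verifies the required sub-folders exist under DETECTRON2_DATASETS:

```python
# Sketch: verify the Cityscapes layout datasets/cityscapes/{leftImg8bit,gtFine}
# under the DETECTRON2_DATASETS root before launching training.
import os

REQUIRED = ("cityscapes/leftImg8bit", "cityscapes/gtFine")

def missing_cityscapes_dirs(root):
    """Return the required sub-directories that are absent under `root`."""
    return [d for d in REQUIRED if not os.path.isdir(os.path.join(root, d))]
```

A typical call would be missing_cityscapes_dirs(os.environ["DETECTRON2_DATASETS"]); an empty result means the layout matches what Detectron2 expects.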

    (Note: I am writing an article on how to use Detectron2 and will publish it soon. Please subscribe to my Medium blog for updates.)

For training, execute the below command:

python3.8 training_cityscapes.py --arch <architecture> --path <model_out> --epochs <num_epochs> --model <pretrained-weight> --resume <0/1>

Inference:

With Balloon Dataset:

For inference, execute the below command:

python3.8 inference_balloon.py --arch <architecture> --model <model_path> --target <cpu/cuda> --source <image/webcam/video_input> --save <0/1>

--arch = Choose from ['maskrcnn', 'centermask_mv2', 'centermask_v19_slimdw', 'solov2', 'condinst'].

--model = Provide the path to the trained model for the respective architecture.

--target = Choose the target device['cpu', 'cuda'].

--source = Run inference on an image, webcam, or video input.

--save = 0/1. This is valid only when the inference source is 'image': it toggles between showing the segmented output on the display and saving it to the "output_images" folder.
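The inference flags above can be summarized as an argument parser. The choices and defaults below are inferred from this README, not copied from the repo's scripts, so treat them as a sketch:

```python
# Sketch of a parser matching the documented inference flags.
import argparse

def build_inference_parser():
    p = argparse.ArgumentParser(description="Instance segmentation inference")
    p.add_argument("--arch", required=True,
                   choices=["maskrcnn", "centermask_mv2",
                            "centermask_v19_slimdw", "solov2", "condinst"])
    p.add_argument("--model", required=True,
                   help="path to the trained weights for the chosen architecture")
    p.add_argument("--target", default="cuda", choices=["cpu", "cuda"])
    p.add_argument("--source", default="image",
                   help="image, webcam, or video input")
    p.add_argument("--save", type=int, default=0, choices=[0, 1],
                   help="only meaningful when --source is an image")
    return p
```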

With Cityscapes Dataset:

For inference, execute the below command:

python3.8 inference_cityscapes.py --arch <architecture> --model <model_path> --target <cpu/cuda> --source <image/webcam/video_input> --save <0/1>

Evaluation:

Here is the evaluation data measured with the different architectures on CPU and GPU configurations:

(Figure: instance segmentation inference times)


Reach me @

LinkedIn GitHub Medium
