
ADE20k Semantic segmentation with MAE

This repository reproduces semantic segmentation results on ADE20k using a Masked Autoencoder (MAE) pre-trained ViT backbone.

Getting started

  1. Install the mmsegmentation library and the other required packages.

     pip install mmcv-full==1.3.0 mmsegmentation==0.11.0
     pip install scipy timm==0.3.2

  2. Install apex for mixed-precision training.

     git clone https://github.com/NVIDIA/apex
     cd apex
     pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

  3. Follow the guide in mmseg to prepare the ADE20k dataset; a sketch of the expected directory layout is shown below.
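
For reference, this is a minimal sketch of the layout mmsegmentation's ADE20k config typically expects. The root directory depends on the data_root setting in the config, so treat the exact paths as an assumption rather than a guarantee:

data/ade/ADEChallengeData2016/
├── images/
│   ├── training/
│   └── validation/
└── annotations/
    ├── training/
    └── validation/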

Fine-tuning for Reproducing Results of MAE ViT-Base

Command:

tools/dist_train.sh configs/mae/upernet_mae_base_12_512_slide_160k_ade20k.py 8 --seed 0 --options model.pretrained=https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth
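
Here the config path and the pre-trained weight URL come from this repository; 8 is the number of GPUs, --seed fixes the random seed, and --options overrides config fields from the command line. As a minimal sketch, assuming the underlying mmsegmentation train.py also exposes the standard --work-dir flag (the directory name below is only an example), you can direct logs and checkpoints to an explicit folder:

tools/dist_train.sh configs/mae/upernet_mae_base_12_512_slide_160k_ade20k.py 8 --seed 0 --work-dir work_dirs/upernet_mae_base_512_ade20k --options model.pretrained=https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth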

Expected results log (paper result: 48.1 mIoU):

+--------+-------+-------+-------+
| Scope  | mIoU  | mAcc  | aAcc  |
+--------+-------+-------+-------+
| global | 48.15 | 58.99 | 83.05 |
+--------+-------+-------+-------+

Evaluation

Command format:

tools/dist_test.sh <CONFIG_PATH> <CHECKPOINT_PATH> <NUM_GPUS> --eval mIoU
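
For example, to evaluate the ViT-Base config above on 8 GPUs (the checkpoint path here is hypothetical; point it at your own fine-tuned weights):

tools/dist_test.sh configs/mae/upernet_mae_base_12_512_slide_160k_ade20k.py work_dirs/upernet_mae_base_512_ade20k/latest.pth 8 --eval mIoU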

Acknowledgment

This code is built on the mmsegmentation library, the timm library, and the Swin, XCiT, SETR, BEiT, and MAE repositories.
