Stream Benchmark

The code in this repository was used for the experiments and results of Batch-Model-Consolidation.

A parallelized multi-expert training framework.

The repository combines methods from FACIL and Mammoth, adapted to work with the AutoDS dataset, and evaluates them on a long sequence of tasks in a distributed fashion.

Install

  1. Install the AutoDS dataset.
  2. git clone https://github.com/fostiropoulos/stream_benchmark.git
  3. cd stream_benchmark
  4. pip install .
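
After step 4, a quick sanity check is to import the package from outside the source directory. This is only a sketch to confirm the install and is not part of the benchmark itself:

```python
# Minimal sanity check after `pip install .`:
# the package should be importable from any directory.
import stream_benchmark

print("stream_benchmark installed at", stream_benchmark.__file__)
```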

AutoDS Feature Vectors Download

We use 71 datasets with features extracted from pre-trained models, as supported by the AutoDS dataset. A detailed table of the datasets is provided with AutoDS.

Hyperparameters

Hyper-parameters are stored in hparams/defaults.json with the values reported in the corresponding papers. Modify the file to set the n_epochs you want to train for and the batch_size you want to use.
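
For example, a minimal sketch that edits these fields programmatically; the exact key layout of defaults.json is an assumption here (it may be nested per method), so adapt the keys to what the file actually contains:

```python
import json
from pathlib import Path

path = Path("hparams/defaults.json")
hparams = json.loads(path.read_text())

# Assumed top-level keys; adjust to the real structure of defaults.json.
hparams["n_epochs"] = 10
hparams["batch_size"] = 128

path.write_text(json.dumps(hparams, indent=2))
```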

Run a single method

python -m stream_benchmark --save_path {save_path} --dataset_path {dataset_path} --model_name {model_name} --hparams hparams/defaults.json

In this code we run the baselines on Stream with CLIP embeddings. For the supported model_name values, see the table of methods below.
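
For illustration, a concrete invocation could look like the following, where the paths are hypothetical placeholders and er is one of the supported model_name values listed below:

```bash
python -m stream_benchmark \
    --save_path ./results/er \
    --dataset_path /data/autods \
    --model_name er \
    --hparams hparams/defaults.json
```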

Run multiple methods in a distributed fashion

Read more on Ray

  1. ray stop

  2. ray start --head

  3. python -m stream_benchmark.distributed --dataset_path {dataset_path} --num_gpus {num_gpus}

NOTE: {num_gpus} is the fractional number of GPUs to use. Set it so that {GPU usage per experiment} * {num_gpus} < 1.
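
For intuition about the fractional setting, the sketch below is not the repository's launcher; it only illustrates Ray's fractional GPU scheduling, which the distributed runner builds on. With num_gpus=0.5, Ray can co-schedule two tasks on one GPU, provided each experiment actually fits in half of the GPU's memory:

```python
import ray

ray.init(address="auto")  # connect to the cluster started with `ray start --head`

# Requesting 0.5 GPUs per task lets Ray place two tasks on the same GPU.
@ray.remote(num_gpus=0.5)
def run_experiment(model_name: str) -> str:
    # Placeholder body; the real launcher is stream_benchmark.distributed.
    return f"finished {model_name}"

results = ray.get([run_experiment.remote(m) for m in ["er", "derpp"]])
print(results)
```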

Extending

The code in test_benchmark.py is a good starting point and a simple example (ignoring the mock.patch calls) for understanding how the benchmark can be extended.

Citation

@inproceedings{fostiropoulos2023batch,
  title={Batch Model Consolidation: A Multi-Task Model Consolidation Framework},
  author={Fostiropoulos, Iordanis and Zhu, Jiaye and Itti, Laurent},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3664--3676},
  year={2023}
}

Methods implemented

| Description | model_name | File |
|---|---|---|
| Batch-Model-Consolidation | bmc | bmc.py |
| Continual learning via Gradient Episodic Memory. | gem | gem.py |
| Continual learning via online EWC. | ewc_on | ewc_on.py |
| Continual learning via MAS. | mas | mas.py |
| Continual learning via Experience Replay. | er | er.py |
| Continual learning via Deep Model Consolidation. | dmc | dmc.py |
| Continual learning via A-GEM, leveraging a reservoir buffer. | agem_r | agem_r.py |
| Continual Learning Through Synaptic Intelligence. | si | si.py |
| Continual learning via Function Distance Regularization. | fdr | fdr.py |
| Gradient based sample selection for online continual learning. | gss | gss.py |
| Continual learning via Dark Experience Replay++. | derpp | derpp.py |
| Continual learning via A-GEM. | agem | agem.py |
| Stochastic gradient descent baseline without continual learning. | sgd | sgd.py |
| Continual learning via Learning without Forgetting. | lwf | lwf.py |
| Continual Learning via iCaRL. | icarl | icarl.py |
| Continual learning via Dark Experience Replay. | der | der.py |
| Continual learning via GDumb. | gdumb | gdumb.py |
| Continual learning via Experience Replay. | er_ace | er_ace.py |
| Continual learning via Hindsight Anchor Learning. | hal | hal.py |
| Joint training: a strong, simple baseline. | joint_gcl | joint_gcl.py |
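
As a reference for what these baselines do, here is a minimal, framework-agnostic sketch of Experience Replay (the er entry above): a reservoir buffer keeps a uniform sample of past examples and mixes them into every training step. It is illustrative only and not the repository's implementation:

```python
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Reservoir sampling buffer: keeps a uniform sample of the stream."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: list[tuple[torch.Tensor, torch.Tensor]] = []
        self.seen = 0

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, n: int) -> list[tuple[torch.Tensor, torch.Tensor]]:
        return random.sample(self.data, min(n, len(self.data)))


def er_step(model, optimizer, x, y, buffer, replay_size=32):
    """One Experience Replay update: current-batch loss plus replayed-batch loss."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    replay = buffer.sample(replay_size)
    if replay:
        rx = torch.stack([ex for ex, _ in replay])
        ry = torch.stack([ey for _, ey in replay])
        loss = loss + F.cross_entropy(model(rx), ry)
    loss.backward()
    optimizer.step()
    # Store the current examples for future replay.
    for xi, yi in zip(x, y):
        buffer.add(xi, yi)
```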