This is an official implementation of the following paper:
Youngjoon Lee, Sangwoo Park, and Joonhyuk Kang, "Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance,"
arXiv preprint arXiv:2210.16519.
The implementation runs inside Docker; launch the container with
`bash docker.sh`
Additionally, install the required packages with
`pip install tensorboard medmnist`
This paper considers the following poisoning attacks (a minimal sketch of the MPAF idea follows this list):
- Targeted model poisoning (Bhagoji, Arjun Nitin, et al., ICML 2019): a targeted model poisoning attack on federated learning
- MPAF (Xiaoyu Cao and Neil Zhenqiang Gong, CVPR Workshops 2022): an untargeted model poisoning attack on federated learning
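For intuition, here is a minimal sketch of the MPAF idea, assuming model weights are handled as flat NumPy vectors; the function name, the scaling factor, and the attacker-chosen base model are illustrative assumptions, not this repository's implementation.

```python
import numpy as np

def mpaf_fake_update(global_weights, base_weights, scale=1e6):
    """Fake-client update in the spirit of MPAF: report an update that
    drags the global model toward an attacker-chosen base model,
    amplified so it dominates honest updates. `scale` is illustrative."""
    return scale * (base_weights - global_weights)

# Toy usage with flat weight vectors (illustrative only).
global_w = np.zeros(5)
base_w = np.random.randn(5)  # attacker-chosen base model
fake_update = mpaf_fake_update(global_w, base_w)
```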
This paper considers the following Byzantine-robust aggregation techniques (a minimal Krum sketch follows this list):
- Vanilla FedAvg (McMahan, Brendan, et al., AISTATS 2017)
- Krum (Blanchard, Peva, et al., NeurIPS 2017)
- Trimmed-mean (Yin, Dong, et al., ICML 2018)
- Fang (Fang, Minghong, et al., USENIX Security 2020)
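For reference, a minimal NumPy sketch of the Krum rule, assuming each client update is a flat vector; this is a textbook rendering of the rule, not this repository's implementation.

```python
import numpy as np

def krum(updates, num_byzantine):
    """Krum: score each update by the sum of squared distances to its
    n - f - 2 nearest neighbors and return the lowest-scoring update."""
    n = len(updates)
    num_neighbors = n - num_byzantine - 2
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    # Sort each row, drop the zero self-distance, sum the closest neighbors.
    scores = [np.sort(row)[1:num_neighbors + 1].sum() for row in dists]
    return updates[int(np.argmin(scores))]

# Toy usage: 10 flat weight vectors, assuming at most 2 Byzantine clients.
updates = [np.random.randn(5) for _ in range(10)]
selected = krum(updates, num_byzantine=2)
```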
Experiments use the blood cell classification dataset (Andrea Acevedo, Anna Merino, et al., Data in Brief 2020).
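The medmnist dependency installed above suggests this dataset is consumed through its BloodMNIST packaging; if so, it can be downloaded and inspected as below (a sketch for orientation, not the repository's data pipeline).

```python
from medmnist import BloodMNIST, INFO

# Download the blood cell dataset (packaged as BloodMNIST in medmnist).
train_set = BloodMNIST(split="train", download=True)
test_set = BloodMNIST(split="test", download=True)

print(INFO["bloodmnist"]["label"])  # the eight blood cell classes
print(len(train_set), len(test_set))
```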
Run the experiment without Byzantine attacks with
`bash execute/run0.sh`
Evaluate the impact of the Byzantine client percentage with
`bash execute/run1.sh`
Evaluate the impact of the non-IID degree with
`bash execute/run2.sh`