An optimized implementation of masked autoencoders (MAEs) (Python, updated Jun 5, 2024)
An optimized implementation of spatiotemporal masked autoencoders
Investigate possibilities for Vision Transformers with multiscale grids
TorchGeo: datasets, transforms, and models for geospatial data
Project for Computer Vision course @ MSc in Artificial Intelligence, UniVR
Change detection on satellite images with masked autoencoders.
Train MAE on Kaggle with 2 GPUs (T4 x2), logging to Wandb
Re-implementation of the method proposed in "DreamDiffusion: Generating High-Quality Images from Brain EEG Signals" by Y. Bai, X. Wang et al., for the Neural Networks course exam
The code for the paper "Contrastive Masked Autoencoders for Self-Supervised Video Hashing" (AAAI'23)
Reproducing the MET framework with PyTorch
PyTorch implementation of MADE
Generative modeling and representation learning through reconstruction
PyTorch wrapper for Deep Density Estimation Models
code for "AdPE: Adversarial Positional Embeddings for Pretraining Vision Transformers via MAE+"
Extraction of deep features/representations of birds using deep learning algorithms.
HSIMAE: A Unified Masked Autoencoder with large-scale pretraining for Hyperspectral Image Classification
Codebase for Imperial MSc AI Individual Project - Self-Supervised Learning for Audio Inference
Official code for CVPR 2024 paper "VideoMAC: Video Masked Autoencoders Meet ConvNets"
A Vector Quantized Masked AutoEncoder for speech emotion recognition
Official implementation of Matrix Variational Masked Autoencoder (M-MAE) for ICML paper "Information Flow in Self-Supervised Learning" (https://arxiv.org/abs/2309.17281)
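The repositories above are variants of the masked-autoencoder (MAE) pretraining recipe: split an image into patches, mask a large random fraction (typically 75%), encode only the visible patches, and train a decoder to reconstruct the masked ones. A minimal NumPy sketch of the patchify-and-mask step is shown below; the function names and shapes are illustrative, not taken from any repository listed here:

```python
import numpy as np

def patchify(img, patch):
    """Split an (H, W, C) image into (N, patch*patch*C) flattened patches."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    gh, gw = h // patch, w // patch
    # Group pixels into a (gh, gw) grid of (patch, patch, c) tiles.
    tiles = img.reshape(gh, patch, gw, patch, c).transpose(0, 2, 1, 3, 4)
    return tiles.reshape(gh * gw, patch * patch * c)

def random_mask(patches, mask_ratio=0.75, rng=None):
    """Keep a random subset of patches; return visible patches and index sets."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx, mask_idx = perm[:n_keep], perm[n_keep:]
    return patches[keep_idx], keep_idx, mask_idx

# Toy 32x32 RGB "image": 16 patches of 8x8x3 = 192 values each.
img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
patches = patchify(img, patch=8)
visible, keep_idx, mask_idx = random_mask(patches, mask_ratio=0.75)
print(patches.shape, visible.shape)  # (16, 192) (4, 192)
```

In the full MAE pipeline, only `visible` is fed to the Transformer encoder, and the decoder reconstructs the patches at `mask_idx` from the encoded tokens plus learned mask tokens; the loss is computed only on the masked patches.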