
Audio–Visual Fusion for Emotion Recognition in the Valence–Arousal Space Using Joint Cross-Attention

This repository provides the code for our paper "Audio–Visual Fusion for Emotion Recognition in the Valence–Arousal Space Using Joint Cross-Attention", accepted to IEEE T-BIOM in 2023. Our paper can be found here.

Citation

If you find this code useful for your research, please cite our paper.

@ARTICLE{10095234,
  author={Praveen, R Gnana and Cardinal, Patrick and Granger, Eric},
  journal={IEEE Transactions on Biometrics, Behavior, and Identity Science}, 
  title={Audio–Visual Fusion for Emotion Recognition in the Valence–Arousal Space Using Joint Cross-Attention}, 
  year={2023},
}

This code uses the Aff-Wild2 dataset to validate the proposed approach for dimensional emotion recognition. The repository has three major blocks for reproducing the results of our paper: preprocessing, training, and inference. Training uses mixed-precision training (torch.cuda.amp). The dependencies and packages required to reproduce the environment of this repository are listed in the environment.yml file.
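Since training relies on torch.cuda.amp, the following is a minimal, generic sketch of one mixed-precision training step; the stand-in model, loss, and tensors are placeholders for illustration only, not the ones defined in main.py.

    import torch
    import torch.nn as nn

    # Generic mixed-precision training step with torch.cuda.amp (illustrative only;
    # the real model, loss, and data pipeline live in main.py).
    model = nn.Linear(512, 2).cuda()              # stand-in for the fusion model
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()

    features = torch.randn(8, 512).cuda()         # stand-in for fused A-V features
    targets = torch.randn(8, 2).cuda()            # stand-in valence/arousal labels

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():               # forward pass runs in mixed precision
        loss = criterion(model(features), targets)
    scaler.scale(loss).backward()                 # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)                        # unscale gradients, then update weights
    scaler.update()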

Creating the environment

Create an environment using the environment.yml file

conda env create -f environment.yml

Models

The pre-trained models of the audio and visual backbones can be obtained here

The fusion models trained using our fusion approach can be found here

jointcam_model.pt:  Fusion model trained using our approach on the Aff-Wild2 dataset
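To inspect the downloaded checkpoint before wiring it into the code, a quick sketch is given below; whether the file stores a plain state_dict or a fully pickled model is an assumption you should verify against test.py.

    import torch

    # Load the released fusion checkpoint on the CPU and look at what it contains.
    # The checkpoint format (state_dict vs. pickled module) is not documented here,
    # so treat this as a sanity check only.
    ckpt = torch.load("jointcam_model.pt", map_location="cpu")
    if isinstance(ckpt, dict):
        print(list(ckpt.keys())[:10])    # e.g. parameter names of a state_dict
    else:
        print(type(ckpt))                # a fully serialized nn.Module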

Table of contents

  • Preprocessing
  • Training
  • Inference

Preprocessing

Return to Table of Contents

Step One: Download the dataset

Return to Table of Contents

Please download the following:

  • The dataset for the valence-arousal track can be downloaded here

Step Two: Preprocess the visual modality

Return to Table of Contents

  • The cropped-aligned images, which are already provided by the dataset organizers, are used to form the visual input. Alternatively, you may use the OpenFace toolkit to extract the cropped-aligned images yourself; however, its per-frame success rate is lower than that of the images provided with the dataset. A hypothetical sketch of forming clips from these frames is shown below.
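For illustration, a hypothetical helper that groups the cropped-aligned frames of one video into fixed-length clips; the clip length and directory layout are assumptions, not values taken from this repository.

    import os
    from glob import glob

    # Hypothetical helper: split the cropped-aligned frames of one video into
    # fixed-length clips of consecutive frames. The clip length (8) and the
    # layout (one folder of .jpg frames per video) are assumptions.
    def make_clips(video_dir, clip_len=8):
        frames = sorted(glob(os.path.join(video_dir, "*.jpg")))
        return [frames[i:i + clip_len]
                for i in range(0, len(frames) - clip_len + 1, clip_len)]

    clips = make_clips("cropped_aligned/video_id")  # "video_id" is a placeholder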

Step Three: Preprocess the audio modality

Return to Table of Contents

  • The audio tracks are extracted from the videos with mkvextract and segmented so that the resulting audio files are aligned with the visual files. These audio files can be generated with Preprocessing/audio_preprocess.py; a rough sketch of the idea follows below.
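A rough sketch of the extraction step, assuming the raw videos are .mkv files with a single audio track; the track id, output codec, and clip length are assumptions, and the actual logic is in Preprocessing/audio_preprocess.py.

    import subprocess

    # Illustrative only: pull the audio track out of one video with mkvextract,
    # then cut it into fixed-length clips with ffmpeg. Track id (1), extension
    # (.aac), and segment length (0.32 s) are assumptions.
    video = "video_id.mkv"                                   # placeholder file name
    subprocess.run(["mkvextract", video, "tracks", "1:video_id.aac"], check=True)
    subprocess.run(["ffmpeg", "-i", "video_id.aac", "-f", "segment",
                    "-segment_time", "0.32", "-c", "copy", "clip_%05d.aac"],
                   check=True)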

Step Four: Preprocess the annotations

Return to Table of Contents

  • The annotations provided by the dataset organizers are preprocessed to obtain the labels of the aligned audio and visual files. To generate these label files, you can use Preprocessing/preprocess_labels.py; a sketch of reading one annotation file is shown below.
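The sketch below shows one way to read a valence-arousal annotation file; the assumed format (a header line followed by one "valence,arousal" pair per frame, with -5 marking invalid frames) should be checked against the files released by the organizers.

    # Hypothetical reader for one valence-arousal annotation file.
    def read_va_annotations(path):
        labels = []
        with open(path) as f:
            next(f)                                  # skip the "valence,arousal" header
            for line in f:
                v, a = map(float, line.strip().split(","))
                labels.append((v, a))
        return labels

    labels = read_va_annotations("annotations/video_id.txt")    # placeholder path
    valid = [(v, a) for v, a in labels if v != -5 and a != -5]  # drop invalid frames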

Training

Return to Table of Contents

  • After obtaining the preprocessed audio and visual files along with the annotations, the model can be trained with the proposed fusion approach using the main.py script. A simplified sketch of the joint cross-attentional fusion idea is given below.
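For intuition only, here is a minimal PyTorch sketch of joint cross-attentional fusion: the audio and visual feature sequences are concatenated into a joint representation, attention maps are computed between that joint representation and each modality, and the attended features are combined for valence-arousal prediction. Feature dimensions, layer names, and the final pooling are assumptions, not the exact architecture used in main.py.

    import torch
    import torch.nn as nn

    class JointCrossAttentionSketch(nn.Module):
        # Minimal sketch of joint cross-attentional A-V fusion (illustrative only).
        def __init__(self, d_a=512, d_v=512):
            super().__init__()
            d_j = d_a + d_v                              # joint feature dimension
            self.wa = nn.Linear(d_j, d_a, bias=False)    # joint -> audio space
            self.wv = nn.Linear(d_j, d_v, bias=False)    # joint -> visual space
            self.out = nn.Linear(d_a + d_v, 2)           # valence and arousal head

        def forward(self, xa, xv):
            # xa: (batch, seq, d_a) audio features; xv: (batch, seq, d_v) visual features
            j = torch.cat([xa, xv], dim=-1)                               # joint representation
            ca = torch.softmax(self.wa(j) @ xa.transpose(1, 2), dim=-1)   # audio attention map
            cv = torch.softmax(self.wv(j) @ xv.transpose(1, 2), dim=-1)   # visual attention map
            xa_att = ca @ xa                                              # attended audio features
            xv_att = cv @ xv                                              # attended visual features
            fused = torch.cat([xa_att, xv_att], dim=-1)
            return self.out(fused.mean(dim=1))                            # clip-level (valence, arousal)

    # Toy forward pass with random feature sequences.
    model = JointCrossAttentionSketch()
    print(model(torch.randn(2, 8, 512), torch.randn(2, 8, 512)).shape)    # torch.Size([2, 2])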

Inference

Return to Table of Contents

  • The results of the proposed model can be reproduced with the trained model. To obtain predictions on the test set using our proposed model, use test.py. A generic implementation of the evaluation metric is sketched below.
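Valence-arousal predictions on Aff-Wild2 are typically evaluated with the Concordance Correlation Coefficient (CCC); the snippet below is a generic NumPy implementation of that metric, not code taken from test.py.

    import numpy as np

    def ccc(y_true, y_pred):
        # Concordance Correlation Coefficient between two 1-D sequences.
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        mu_t, mu_p = y_true.mean(), y_pred.mean()
        var_t, var_p = y_true.var(), y_pred.var()
        cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
        return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

    # Example: CCC of valence predictions against ground truth (toy values).
    print(ccc([0.1, 0.3, -0.2, 0.4], [0.2, 0.25, -0.1, 0.5]))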

👍 Acknowledgments

Our code is based on TSAV and Recursive-joint-co-attention.
