MotionDiffuseDCT: Investigating Dynamical Representations for Human Motion Generation

This project is a variant of the MotionDiffuse framework that aims to improve diffusion-based human motion generation models. We propose carrying out diffusion in a lower-dimensional dynamical space obtained via the Discrete Cosine Transform (DCT). This lets us leverage the efficiency of classical dynamical representations and reduce the computational burden associated with video processing applications.
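To make the idea concrete, below is a minimal NumPy/SciPy sketch of the kind of transform involved: a motion sequence is projected onto a truncated DCT basis along the time axis, diffusion would then operate on the shorter coefficient sequence, and the inverse transform recovers an approximate motion. The function names, the 196-frame length, and the choice of 32 retained coefficients are illustrative assumptions, not the repository's actual interface.

# Minimal sketch (assumed names, not the repository's API): project a motion
# sequence into a truncated DCT space and reconstruct it approximately.
import numpy as np
from scipy.fft import dct, idct

def to_dct_space(motion, num_kept):
    # motion: (num_frames, num_features); transform along the time axis
    # and keep only the first num_kept low-frequency coefficient rows.
    coeffs = dct(motion, type=2, axis=0, norm="ortho")
    return coeffs[:num_kept]

def from_dct_space(coeffs, num_frames):
    # Zero-pad the truncated coefficients back to the full temporal length,
    # then invert the transform to recover an approximate motion sequence.
    padded = np.zeros((num_frames, coeffs.shape[1]))
    padded[:coeffs.shape[0]] = coeffs
    return idct(padded, type=2, axis=0, norm="ortho")

# Example: a 196-frame sequence with 66 pose features compresses to 32
# coefficient rows, so a diffusion model would see a ~6x shorter sequence.
motion = np.random.randn(196, 66)
coeffs = to_dct_space(motion, num_kept=32)
approx = from_dct_space(coeffs, num_frames=196)
print(coeffs.shape, approx.shape)  # (32, 66) (196, 66)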

Below is the content from the original project. For citations or any other references, please refer to the original README below.

Original Project

MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model

S-Lab, Nanyang Technological University; SenseTime Research
(Demo animations in the original README: "play the guitar", "walk sadly", "walk happily", "check time")

This repository contains the official implementation of MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model.


Updates

[10/2022] Add a 🤗Hugging Face Demo for text-driven motion generation!

[10/2022] Add a Colab Demo for text-driven motion generation!

[10/2022] Code release for text-driven motion generation!

[8/2022] Paper uploaded to arXiv.

Text-driven Motion Generation

You may refer to this file for a detailed introduction.

Citation

If you find our work useful for your research, please consider citing the paper:

@article{zhang2022motiondiffuse,
  title={MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model},
  author={Zhang, Mingyuan and Cai, Zhongang and Pan, Liang and Hong, Fangzhou and Guo, Xinying and Yang, Lei and Liu, Ziwei},
  journal={arXiv preprint arXiv:2208.15001},
  year={2022}
}

Acknowledgements

This study is supported by NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).
