How to verify whether the training of VAE is good? #16
Hi @LinghaoChan, if you use the HumanML3D dataset, the normal loss range for the diffusion stage (text-to-motion task) is around [0.45, 1.0]; for the VAE it should be around [0.2, 0.4]. For visualization, both the VAE and diffusion stages can use the same visualization scripts. You can refer to "Details of training" in the FAQ (GitHub README) and to issues #5 and #9 for more details.
Hi, if your VAE results are not correct, please pay attention to issue #18. We have fixed a bug in the KL loss.
Fine, thanks.
Hi, I notice that LAMBDA_KL=0.0001, which is much smaller than the other LAMBDA values. Does it really matter when training the VAE? I trained the model with and without it, and both results seem good.
It is quite important for the second stage (the diffusion stage). The KL term regularizes the latent distribution, making the latent space meaningful. If you refer to other papers, the KL loss weight is usually set to a small value, such as 1e-3, 1e-4, or 1e-5.
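To make the discussion above concrete, here is a minimal sketch of how a small KL weight enters the VAE objective. This is not the repository's actual training code; the function names and the use of a closed-form Gaussian KL against a standard normal are assumptions, with LAMBDA_KL set to the 1e-4 value mentioned in the question.

```python
import math

# Hypothetical constant mirroring the LAMBDA_KL=0.0001 config value discussed above.
LAMBDA_KL = 1e-4

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dims."""
    return 0.5 * sum(m * m + math.exp(lv) - 1.0 - lv
                     for m, lv in zip(mu, logvar))

def vae_loss(recon_loss, mu, logvar, lambda_kl=LAMBDA_KL):
    """Total VAE objective: reconstruction term plus a lightly weighted KL term."""
    return recon_loss + lambda_kl * kl_to_standard_normal(mu, logvar)

# Example: with lambda_kl this small, the reconstruction term dominates the total,
# which is why reconstructions look similar with and without the KL term --
# its role is to shape the latent distribution for the later diffusion stage.
loss = vae_loss(recon_loss=0.3, mu=[0.1, -0.2], logvar=[0.0, 0.0])
```

Because the KL weight is several orders of magnitude below the reconstruction weight, removing it barely changes reconstruction quality, but without it the latent space need not stay close to a standard normal, which hurts the diffusion stage that samples from that space.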
How can I verify whether the VAE training is good? Have you provided any code for visualizing the VAE training results?