
Enabling device_map="auto" for Video-LLaVA #30858

Closed
darshana1406 opened this issue May 16, 2024 · 1 comment · Fixed by #30870
Labels
Feature request Request for a new feature Vision

Comments

@darshana1406
Contributor

Feature request

To load a model across multiple devices, `_no_split_modules` has to be defined in the model's pretrained model class, e.g. as is done here for LLaVa.
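For illustration, a minimal sketch of how `_no_split_modules` is usually declared on a `PreTrainedModel` subclass so that accelerate's `device_map="auto"` knows which submodules must not be sharded across devices (the class and module names below are hypothetical placeholders, not the actual fix):

```python
# Hypothetical sketch: _no_split_modules lists submodule class names that
# must stay whole on a single device when device_map="auto" partitions the
# model. Names here are illustrative, not the real Video-LLaVA modules.

class VideoLlavaPreTrainedModel:
    # In transformers this would subclass PreTrainedModel; splitting a
    # decoder layer across devices would break its residual connections,
    # so such layers are listed here to keep them on one device.
    _no_split_modules = ["VideoLlavaVisionAttention", "LlamaDecoderLayer"]


# Once the attribute exists, multi-GPU loading becomes possible, e.g.:
# model = VideoLlavaForConditionalGeneration.from_pretrained(
#     checkpoint_name, device_map="auto"
# )
```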

Motivation

To enable inference across multiple lower-end devices and avoid OOM errors.

Your contribution

@zucchini-nlp
I could try to work on this but I'm not sure about what modules should be included here.

@zucchini-nlp
Member

Thanks for opening an issue! Sure, adding `_no_split_modules` in the same way as it is done for other LLaVa models will do the work. See here for an example.

Let me know if you need any guidance 😄
