
Prepare clip.cpp for upcoming llava.cpp #31

Open
monatis opened this issue Jul 5, 2023 · 1 comment

monatis (Owner) commented Jul 5, 2023

I'm still not 100% sure whether to call it llava.cpp or to pick another name that signals future support for other multimodal generation models --maybe multimodal.cpp or lmm.cpp (large multimodal model). Open to suggestions, but let's use llava.cpp as the code name for now.

  • Update CMakeLists.txt with a CLIP_STANDALONE flag to toggle standalone mode. When ON, build against the ggml submodule; when OFF, build with the ggml.h and ggml.c files included directly in llama.cpp.
  • Implement a function that returns hidden states from a given layer index, to be used in llava.cpp.
  • Create a separate repo for llava.cpp. That repo should add both the clip.cpp and llama.cpp repos as submodules and configure with CLIP_STANDALONE=OFF so that clip.cpp builds against the ggml sources included in llama.cpp.
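The CMakeLists.txt toggle described in the first item could look roughly like this minimal sketch; the target name `clip` and the `LLAMA_CPP_DIR` variable are assumptions for illustration, not the repo's actual build script:

```cmake
# Hypothetical sketch of the proposed CLIP_STANDALONE toggle.
option(CLIP_STANDALONE "Build against the bundled ggml submodule" ON)

if (CLIP_STANDALONE)
    # Standalone mode: use the ggml submodule checked out in this repo.
    add_subdirectory(ggml)
    target_link_libraries(clip PRIVATE ggml)
else()
    # Embedded mode: compile the ggml sources shipped inside llama.cpp.
    target_sources(clip PRIVATE ${LLAMA_CPP_DIR}/ggml.c)
    target_include_directories(clip PRIVATE ${LLAMA_CPP_DIR})
endif()
```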
@monatis monatis self-assigned this Jul 5, 2023
fire commented Jul 5, 2023

Does multimodal_generative.cpp sound ok?
