
Roadmap #3
Open · 3 of 14 tasks
ctlllll opened this issue Sep 12, 2023 · 15 comments

Comments

@ctlllll
Contributor

ctlllll commented Sep 12, 2023

Roadmap

Functionality

Integration

Local Deployment

Serving

Research

ctlllll added the documentation label (Improvements or additions to documentation) on Sep 12, 2023
ctlllll pinned this issue on Sep 12, 2023
@JianbangZ

Looks like a promising roadmap. I think llama.cpp support should be given higher priority.

@Kimiko-AI

Agreed, faster t/s is really important for llama.cpp users.

@yhyu13

yhyu13 commented Sep 13, 2023

Would love to see Medusa as a plugin for ooba's textgen webui, for Medusa-head models.

@yhyu13

yhyu13 commented Sep 13, 2023

Would Medusa be compatible with GPTQ-quantized models?

Specifically, would two sets of Medusa heads, fine-tuned on the unquantized and the quantized model respectively, be the same? Or can they be swapped?

@ctlllll
Contributor Author

ctlllll commented Sep 13, 2023

Would Medusa be compatible with GPTQ-quantized models?

Specifically, would two sets of Medusa heads, fine-tuned on the unquantized and the quantized model respectively, be the same? Or can they be swapped?

We didn't try this, but we can draw an analogy to the 33B model we trained with a bitsandbytes 8-bit quantized base model, where the difference seems to be minor. Still, more investigation is needed :)
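
For context, here is a minimal sketch of what training Medusa-style heads on a frozen, 8-bit quantized base looks like, assuming a standard transformers + bitsandbytes setup; the head structure, checkpoint name, and variable names are illustrative, not the project's exact code:

```python
# Illustrative sketch (not the repository's exact code): Medusa-style heads are small
# MLPs trained on top of a frozen base model; here the base is loaded in 8-bit via
# bitsandbytes, as in the 33B experiment mentioned above.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.3",   # placeholder checkpoint
    load_in_8bit=True,        # bitsandbytes 8-bit quantization of the base weights
    device_map="auto",
)
base.eval()
for p in base.parameters():
    p.requires_grad_(False)   # the base stays frozen; only the heads are trained

hidden_size = base.config.hidden_size
vocab_size = base.config.vocab_size
num_medusa_heads = 4          # head k predicts the token k+1 positions ahead

# One small head per lookahead position, each fed the base model's last hidden state.
medusa_heads = nn.ModuleList(
    nn.Sequential(
        nn.Linear(hidden_size, hidden_size),
        nn.SiLU(),
        nn.Linear(hidden_size, vocab_size, bias=False),
    )
    for _ in range(num_medusa_heads)
).to(device="cuda", dtype=torch.float16)
```

Since only the heads carry trainable weights and they consume the base model's hidden states, whether heads trained on an unquantized base transfer to a quantized one mostly comes down to how much quantization shifts those hidden states, which the 33B 8-bit observation above suggests is small.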

ctlllll mentioned this issue Sep 17, 2023 (Closed)
@AIApprentice101

Please consider supporting quantized models like GPTQ, AWQ, etc.

@ctlllll
Contributor Author

ctlllll commented Sep 18, 2023

Please consider supporting quantized models like GPTQ, AWQ, etc.

Thanks for the suggestion. Those models should be easy to integrate just by loading the base model in those formats. We are also trying to integrate Medusa into frameworks where the speed actually benefits from quantization, e.g., mlc-llm and llama.cpp.
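
As a rough illustration of "loading the base model in those formats": with a recent transformers release and the corresponding backend installed (auto-gptq/optimum for GPTQ, autoawq for AWQ), a quantized checkpoint loads through the same API as a full-precision one, and the Medusa heads would sit on top of it unchanged. The checkpoint name below is a placeholder:

```python
# Illustrative only: loading a GPTQ-quantized base checkpoint via transformers.
# The quantization config ships with the checkpoint, so no extra arguments are needed
# beyond the usual from_pretrained call (auto-gptq / optimum must be installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/vicuna-7B-v1.5-GPTQ"  # placeholder GPTQ checkpoint
base = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```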

@JianbangZ

Thanks for the suggestion. Those models should be easy to integrate just by loading the base model in those formats. We are also trying to integrate Medusa into frameworks where the speed actually benefits from quantization, e.g., mlc-llm and llama.cpp.

Exciting. Is there a timeline for llama.cpp support? What's your best guess?

@ctlllll
Contributor Author

ctlllll commented Sep 18, 2023

Exciting. Is there a timeline for llama.cpp support? What's your best guess?

We'll start with MLC-LLM first, as it's more friendly for integration. For llama.cpp, we currently don't have the bandwidth to do it ourselves, and it would be greatly appreciated if volunteers could help us with it :)

@ctlllll
Contributor Author

ctlllll commented Sep 18, 2023

🎉 Exciting News! 🎉

We are thrilled to announce that we have received an award from Chai Research! While the monetary value may not be substantial, we are dedicating it as a token of our appreciation for the invaluable contributions made by our community. The funds will be allocated as development bounties to incentivize the achievement of key milestones.

🏆 First Bounty: Porting Medusa to Llama.cpp #35 🏆
Bounty Amount: $100

@feifeibear

Hello @ctlllll, thanks for providing such a wonderful project. I am interested in the fine-grained KV cache management part. Could you offer more guidance on this?

I have been working on a speculative sampling demo for a while:

https://github.com/feifeibear/LLMSpeculativeSampling

@ctlllll
Contributor Author

ctlllll commented Sep 20, 2023

Hello @ctlllll, thanks for providing such a wonderful project. I am interested in the fine-grained KV cache management part. Could you offer more guidance on this?

Hi @feifeibear, thanks for your interest! In the current version, we implemented a pre-allocated KV cache with the philosophy of keeping the original HF APIs, aiming only to reduce the memory-movement cost when updating the KV cache. For something more dynamic, I think the PagedAttention mechanism in vLLM might be a better reference :)
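
To make the pre-allocated KV cache idea concrete, here is a minimal sketch (illustrative shapes and names, not the repository's actual implementation): the key/value buffers are allocated once at the maximum sequence length, and each decoding step writes new entries in place instead of concatenating tensors, which is where the memory-movement savings come from.

```python
# Minimal sketch of a pre-allocated KV cache (illustrative, assumed shapes/names).
import torch

class PreallocatedKVCache:
    def __init__(self, num_layers, num_kv_heads, head_dim, max_len,
                 batch_size=1, dtype=torch.float16, device="cuda"):
        # One big buffer: [layer, k/v, batch, kv_head, position, head_dim]
        shape = (num_layers, 2, batch_size, num_kv_heads, max_len, head_dim)
        self.data = torch.zeros(shape, dtype=dtype, device=device)
        self.length = 0  # number of positions currently filled

    def update(self, layer, k, v):
        # k, v: (batch, kv_heads, new_tokens, head_dim); written in place, no torch.cat.
        n = k.shape[2]
        self.data[layer, 0, :, :, self.length:self.length + n] = k
        self.data[layer, 1, :, :, self.length:self.length + n] = v
        # Return views over the filled prefix for attention at this layer.
        return (self.data[layer, 0, :, :, :self.length + n],
                self.data[layer, 1, :, :, :self.length + n])

    def advance(self, n):
        # Called once per step after all layers have written their new tokens;
        # rejected speculative tokens can be dropped by simply not advancing past them.
        self.length += n
```

A paged scheme like vLLM's PagedAttention instead allocates the cache in fixed-size blocks on demand, which is what makes it more dynamic than a single pre-allocated buffer.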

@nikshepsvn

Hey all, any updates on this?

@ctlllll
Contributor Author

ctlllll commented Nov 21, 2023

Hey all, any updates on this?

We have some exciting stuff baking now. Let's wait and see :p

@nivibilla

Hi, could sglang be placed on the roadmap too? It's a recent release, also from lmsys (who made vllm), but it's faster.

https://github.com/sgl-project/sglang
