
how to finetune with gemma model? #8

Open
runningabcd opened this issue Feb 22, 2024 · 8 comments
Labels
stat:awaiting response Status - Awaiting response from author type:support Support issues

Comments

@runningabcd

How to fine-tune with the Gemma model?

@runningabcd
Author

I have already downloaded the gemma-7b-it model from Hugging Face, but I can't find a script to fine-tune it with my own data.

@runningabcd
Author

help

@runningabcd
Author

How do I run SFT with Gemma? Can you tell me the SFT data format?

@runningabcd
Author

@pengchongjin

@pengchongjin
Collaborator

pengchongjin commented Feb 22, 2024

Hi there. Unfortunately, this repo doesn't provide fine-tuning features.

Here are a few alternatives that might fit your needs:

  1. The Gemma model card in Vertex Model Garden has a few notebooks that demonstrate how to fine-tune the model and then deploy it to Vertex endpoints.
  2. The Gemma model card on Kaggle has a few notebooks that use KerasNLP to do fine-tuning.
  3. Hugging Face demonstrates how to fine-tune with TRL in this blog post.

Hope it helps.
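On the SFT data-format question asked above: the `<start_of_turn>`/`<end_of_turn>` dialogue markers below are Gemma's documented chat format for instruction-tuned checkpoints, but the field names (`"text"`, the prompt/response pairing) and the JSONL layout are assumptions about what a typical SFT trainer expects, not an official spec. A minimal sketch:

```python
# Sketch of one way to lay out SFT examples for an instruction-tuned
# Gemma checkpoint. The turn markers are Gemma's chat format; the JSONL
# layout with a single "text" field is an assumption (many SFT trainers
# accept it by default, but check your trainer's docs).
import json

def format_gemma_turn(prompt: str, response: str) -> str:
    """Render one prompt/response pair in Gemma's chat turn format."""
    return (
        f"<start_of_turn>user\n{prompt}<end_of_turn>\n"
        f"<start_of_turn>model\n{response}<end_of_turn>\n"
    )

def to_jsonl(pairs) -> str:
    """Serialize (prompt, response) pairs to JSONL, one record per line."""
    return "\n".join(
        json.dumps({"text": format_gemma_turn(p, r)}) for p, r in pairs
    )

example = to_jsonl([("What is Gemma?", "Gemma is a family of open models.")])
```

Each JSONL line can then be fed to an SFT trainer (e.g. TRL's, as in the blog post above) as a plain pre-formatted text field.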

@r-gheda
Contributor

r-gheda commented Feb 24, 2024

@pengchongjin Is it possible to implement a class for fine-tuning the model inside this repo, similar to what is done in llama-recipes?

@aliasneo1

Are there any tutorials for fine-tuning the 7b-it-quant model?

@tilakrayal tilakrayal added the type:support Support issues label Apr 24, 2024
@selamw1

selamw1 commented Apr 24, 2024

Hi @aliasneo1,

There are a few tutorials that demonstrate fine-tuning the gemma-2b model. You can follow similar procedures to fine-tune the gemma-7b-it variant.

Here are some resources:

@tilakrayal tilakrayal added the stat:awaiting response Status - Awaiting response from author label Apr 25, 2024