How to instruct the model to return proper key/value pairs in JSON format, without any other text #154
Comments
If you specify the "format" parameter and set it to "json", you will get your desired results.
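For context, a minimal sketch of what that parameter looks like, assuming an Ollama-style REST API served locally (the endpoint URL, model tag, and prompt text here are assumptions, not from this thread):

```python
import json
import urllib.request

# Build a generate request with "format": "json", which constrains the
# model to emit only a valid JSON object in its "response" field.
payload = {
    "model": "llama3",  # assumed local model tag; adjust to your setup
    "prompt": "Extract the key/value pairs from this text as JSON: ...",
    "format": "json",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # default Ollama endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment to actually call a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

With `"format": "json"` set, the response body's `response` field should parse directly with `json.loads`, with no surrounding prose.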
For the llama3 8b instruct model, how do I use this format parameter? Can you share an example or some prompt-related documentation?
Here is an example:

```python
import torch
import transformers

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Extract the key/value pairs from this text as JSON: ..."},
]

# Render the chat messages into the llama3 prompt template
prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Stop generation at either end-of-sequence token
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
Thanks a ton, sir! I will check this.
Same prompt and same OCR text from the image. Is there any option for this? I understand this is an LLM. Can you suggest some prompt ideas for extracting key/value pairs from a paragraph?
I am getting the same result as before in spite of using prompt = pipeline.tokenizer.apply_chat_template(
I need to get JSON results from a paragraph containing key/value pairs, but the llama3 instruct model returns the JSON wrapped in some unwanted text. How can I get a clean answer from the llama3 model?
or
Is there any other option in code, or a parameter available, to get that result?
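If no generation-side constraint is available, a common workaround is to post-process the model output and pull out just the JSON object. A minimal sketch (the helper name `extract_json` and the sample output string are illustrative, not from this thread):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first {...} block out of model output that may
    include extra prose before or after the JSON."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# Typical llama3 output: JSON surrounded by unwanted chatter
raw = 'Here is the extracted data:\n{"name": "Alice", "invoice": "42"}\nLet me know if you need more.'
print(extract_json(raw))  # {'name': 'Alice', 'invoice': '42'}
```

Pairing this with a system prompt such as "Respond ONLY with a valid JSON object, no explanation" usually reduces, but does not guarantee eliminating, the extra text, so keeping the parsing fallback is prudent.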