local LLM cannot use Tool #554
Comments
Sometimes the model is just not capable enough. That said, I'd recommend trying the new version we are testing. On this version you can also use the prompt format the model was trained with, something like:

```python
from crewai import Agent

agent = Agent(
    role="{topic} specialist",
    goal="Figure {goal} out",
    backstory="I am the master of {role}",
    system_template="""<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>""",
    prompt_template="""<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>""",
    response_template="""<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>""",
)
```

I'll try running your example locally myself tomorrow as well, but I wanted to share some context that might help :)
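For anyone unsure what those three templates produce, here is a hedged illustration of how they render into a single Llama 3-format prompt once the placeholders are filled in (the variable names and sample text are illustrative, not CrewAI internals):

```python
# Each template becomes one block; the sample system/user text is made up.
system_block = (
    "<|start_header_id|>system<|end_header_id|>\n"
    "You are a helpful specialist.<|eot_id|>"
)
user_block = (
    "<|start_header_id|>user<|end_header_id|>\n"
    "Summarize the report.<|eot_id|>"
)
assistant_header = "<|start_header_id|>assistant<|end_header_id|>\n"

# The model sees the blocks concatenated, ending with an open assistant
# turn for it to complete:
rendered = system_block + user_block + assistant_header
print(rendered)
```

If the raw prompt reaching the model does not look like this, the template overrides are probably not being applied.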
I want to try it with phi3.
Your script is not working for me, but it did do "better" with the system template. After a bit of digging around, I believe the problem may come from LangChain's inability to pass `raw=true` to Ollama, which I believe is necessary to let Ollama override the template in its Modelfile. Please keep in mind I am not sure about this at all, but there seems to be no info, issues, or PRs on it anywhere.
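One way to check this theory is to bypass LangChain and hit Ollama's `/api/generate` endpoint directly, which does accept a `raw` flag that skips the Modelfile template. A minimal sketch (assuming `ollama serve` is running on the default `localhost:11434`; actually sending the request requires that server):

```python
import json
import urllib.request

def build_raw_request(model: str, prompt: str,
                      host: str = "http://localhost:11434") -> urllib.request.Request:
    """Build an Ollama /api/generate request with raw=True, so the server
    applies no Modelfile template and the prompt is passed through verbatim."""
    payload = {"model": model, "prompt": prompt, "raw": True, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With a running server, the response text comes back like this:
# with urllib.request.urlopen(build_raw_request("llama3", "...")) as resp:
#     print(json.loads(resp.read())["response"])
```

If the raw call behaves correctly while the LangChain path doesn't, that would point at the missing `raw=true` plumbing.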
It seems to work correctly with phi3.
Can confirm it's working a bit with Hermes 2 Pro... it seems llama3 just doesn't get us, man.
It also works with Hermes 2 Llama 3, Hermes Solar 10.7B, and Dolphin 2.8 Mistral 7B, although it does take multiple attempts. There was one model that did really well; I'll find it tomorrow.
Hey folks, this version has a couple of features that give better support to local models. I'm putting together new docs on those to help out!
Can you share your environment + code? #621 uses this code without success.
My code is in the first comment, #554 (comment). It needs several tries; it randomly succeeds or fails.
Local LLMs such as llama3 or gemma all fail to save to a file (after running `ollama serve`, of course).
However, the same flow succeeds when I use GPT-4 and set my API key.
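To rule out the file-writing side and confirm the failure is the local model not emitting a valid tool call, a minimal sketch of such a save-to-file tool can be tested in isolation (the function name and signature here are illustrative, not CrewAI's actual tool API):

```python
import os
import tempfile

def save_to_file(path: str, content: str) -> str:
    """Write content to path and return a confirmation message the LLM can read."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return f"Saved {len(content)} characters to {path}"

# Usage example with a temporary file:
tmp = os.path.join(tempfile.gettempdir(), "crew_output.txt")
message = save_to_file(tmp, "hello from the crew")
```

If a tool like this works when called directly but is never invoked during a run with llama3 or gemma, the problem is the model failing to produce the tool-call format, which matches the template issues discussed above.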