local LLM cannot use Tool #554

Open
HomunMage opened this issue May 3, 2024 · 9 comments
@HomunMage commented May 3, 2024

```python
import os
from crewai import Agent, Task, Crew, Process
from langchain_community.llms import Ollama

from crewai_tools import BaseTool

# Initialize the local Ollama LLM with the specific model
llama3 = Ollama(model='llama3:8b')

class FileWriterTool(BaseTool):
    name: str = "FileWriter"
    description: str = "Writes given content to a specified file."

    def _run(self, filename: str, content: str) -> str:
        # Open the specified file in write mode and write the content
        with open(filename, 'w') as file:
            file.write(content)
        return f"Content successfully written to {filename}"

# Set up the FileWriterTool
file_writer = FileWriterTool()

# Define the agent with a role, goal, and tools
researcher = Agent(
    role='Knowledge Article Writer',
    goal='Create and save detailed content on professional domains to a file.',
    backstory="Passionate about crafting in-depth articles on Game Design.",
    verbose=True,
    allow_delegation=False,
    llm=llama3,
    tools=[file_writer]
)

# Create a task that utilizes the FileWriterTool to save content
task1 = Task(
    description="Write and save an article about game design using the FileWriter tool.",
    expected_output="A file named 'game_design_article.txt' with the article content.",
    agent=researcher,
    tools=[file_writer],
    function_args={'filename': 'game_design_article.txt', 'content': 'Detailed content generated by LLM about game design.'}
)

# Instantiate the crew with a sequential process and execute the task
crew = Crew(
    agents=[researcher],
    tasks=[task1],
    process=Process.sequential,
    verbose=2
)

# Execute the crew tasks and print the result
result = crew.kickoff()
print("######################")
print(result)


Local LLMs such as llama3 or gemma all fail to save to a file (this is, of course, after running `ollama serve`).
However, this flow succeeds when I use GPT-4 and set my API key.
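
One quick way to rule out the tool itself is to call its `_run` method directly, bypassing the agent entirely; a minimal sketch using the `FileWriterTool` defined above:

```python
# Minimal sanity check: invoke the tool directly, without any LLM involved,
# to confirm the file-writing logic works on its own.
print(file_writer._run(filename='test.txt', content='hello'))
# Expected output: Content successfully written to test.txt
```

If this works, the failure is in the model's tool calling rather than in the tool.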

@joaomdmoura (Owner) commented May 3, 2024

Sometimes the model is just not capable enough. That said, I'd recommend trying a new version we are testing, 0.30.0rc5; you can install it with `pip install 'crewai[tools]'==0.30.0rc5`.

On this version you can also set the prompt format the model was trained on, with something like:

```python
agent = Agent(
    role="{topic} specialist",
    goal="Figure {goal} out",
    backstory="I am the master of {role}",
    # Llama 3 chat-format templates; the {{ .System }}, {{ .Prompt }} and
    # {{ .Response }} placeholders are filled in with the rendered content.
    system_template="""<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>""",
    prompt_template="""<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>""",
    response_template="""<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>""",
)
```

I'll try running your example locally myself tomorrow as well, but I just decided to share some context that might help :)

@francescoagati

I want to try with phi3.

@TheBitmonkey

Your script is not working for me, but it did do "better" with the system template. After a bit of digging around, I believe the problem may come from Langchain's inability to pass `raw=true` to Ollama. I believe this is necessary to allow Ollama to override the template in its Modelfile.

Please also consider that I am not at all sure about this, but there seems to be no info, issues, or PRs on this at all.
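
For context, `raw` is an option on Ollama's `/api/generate` endpoint: when set, Ollama applies no Modelfile template and the caller must supply the model's special tokens itself. A minimal sketch against a local Ollama server (the Llama 3 prompt string here is a hypothetical example):

```python
import requests

# Minimal sketch: "raw": True tells Ollama to skip the Modelfile template,
# so the prompt must already contain the model's chat-format tokens.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3:8b",
        "prompt": (
            "<|start_header_id|>user<|end_header_id|>\n\n"
            "Say hello<|eot_id|>"
            "<|start_header_id|>assistant<|end_header_id|>\n\n"
        ),
        "raw": True,
        "stream": False,
    },
)
print(response.json()["response"])
```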

@HomunMage (Author)

It seems to work right with phi3.

@TheBitmonkey

Can confirm it's working a bit with Hermes 2 Pro... seems llama3 just doesn't get us, man.

@hassamc commented May 12, 2024

Also works with Hermes 2 Llama 3 and Hermes Solar 10.7B, and it's also tested to be working with Dolphin 2.8 Mistral 7B. However, it does take multiple attempts. There was one that did really well; I'll find it tomorrow.

@joaomdmoura (Owner)

Hey folks, this version includes a couple of features that will give better support to local models; I'm putting together new docs on those to help out!

@noggynoggy (Contributor)

> It seems to work right with phi3.

Can you share your environment + code? #621 uses this code without success.

@HomunMage (Author)

> > It seems to work right with phi3.
>
> Can you share your environment + code? #621 uses this code without success.

My code is in the first comment, #554 (comment); just replace llama3 with phi3, and the environment is both Windows and Linux.

It takes several tries, and it randomly succeeds or fails.
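
For concreteness, the only change to the script in the first comment is the model name passed to the `Ollama` constructor (assuming the `phi3` model has been pulled locally):

```python
# Hypothetical one-line swap in the original script:
phi3 = Ollama(model='phi3')  # instead of Ollama(model='llama3:8b')
# ...then pass llm=phi3 when constructing the Agent.
```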
