I am new to CrewAI and find it very user-friendly and easy to get started with. I have recently been testing the tools with a local LLM via the Ollama integration, and I frequently encounter this error: `'final_output': 'Error: the Action Input is not a valid key, value dictionary.'`
Here is the log:
**❓What is the current time?
[DEBUG]: == Working Agent: AI assistant proficient in using tools
[INFO]: == Starting Task: Make every effort to find the appropriate tools to address user requests
Entering new CrewAgentExecutor chain...
Thought: To find the current temperature and weather conditions in New York City, I can use the getWeather tool with the action input {"city": "New York City"}.
Action: getWeather
Action Input: {"city": "New York City"}
Thought: I will wait for the observation to gather the current temperature and weather details in New York City.
[Pausing here, waiting for the observation]
Finished chain.
[DEBUG]: == [AI assistant proficient in using tools] Task output: Error: the Action Input is not a valid key, value dictionary.
{'final_output': 'Error: the Action Input is not a valid key, value dictionary.', 'tasks_outputs': [TaskOutput(description='Make every effort to find the appropriate tools to address user requests', summary='Make every effort to find the appropriate tools to address...', exported_output='Error: the Action Input is not a valid key, value dictionary.', raw_output='Error: the Action Input is not a valid key, value dictionary.')]}**
Upon debugging, I found that the problem occurs while parsing the tool input. The LLM returns the Action Input followed by some descriptive text and other content, which causes a parse error. The issue arises in `tool_usage.py`, where the code evaluates the action input with `ast.literal_eval`; if the input is not a plain dictionary literal, it raises an exception.
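The failure mode can be reproduced in isolation, independent of CrewAI (a minimal sketch using only the standard library):

```python
import ast

clean = '{"city": "New York City"}'
noisy = '{"city": "New York City"}\nThought: I will wait for the observation.'

# A bare dict literal parses fine.
print(ast.literal_eval(clean))  # {'city': 'New York City'}

# Trailing commentary after the dict is not valid Python syntax,
# so literal_eval raises before any evaluation happens.
try:
    ast.literal_eval(noisy)
except SyntaxError as exc:
    print("parse failed:", exc)
```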
`tool_usage.py` => line 285:

```python
tool_name = self.action.tool
tool = self._select_tool(tool_name)
try:
    # If tool_input is not a plain dict literal, this raises an exception.
    arguments = ast.literal_eval(self.action.tool_input)
except Exception:
    return ToolUsageErrorException(
        f'{self._i18n.errors("tool_arguments_error")}'
    )
```
This calls into the `ast` module, where `literal_eval` first tries to parse the string:

```python
if isinstance(node_or_string, str):
    # Fails here when trailing text follows the dict literal.
    node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
```
In the log above, if the output ends with `Action Input: {"city": "New York City"}`, the tool call succeeds; but when the following description is appended, the exception is thrown:
Thought: I will wait for the observation to gather the current temperature and weather details in New York City.
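One workaround I can imagine is to pull the first `{...}` block out of the raw output before parsing, ignoring any trailing commentary. A sketch (the helper name is hypothetical, not part of CrewAI, and the non-greedy regex only handles flat, non-nested dicts):

```python
import json
import re

def extract_action_input(raw: str) -> dict:
    """Best-effort: take the first {...} block from the LLM output and
    parse it as JSON, discarding any text before or after it."""
    match = re.search(r"\{.*?\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no dictionary found in tool input")
    return json.loads(match.group(0))

raw = '{"city": "New York City"}\nThought: I will wait for the observation.'
print(extract_action_input(raw))  # {'city': 'New York City'}
```

This would make the parser tolerant of the descriptive text that local models tend to append, at the cost of silently ignoring anything outside the first JSON object.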
What improvements can be made? Are there any suggestions for refining the role, description, and expected output prompts to ensure better returns from the LLM?
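For context, one prompting-side mitigation might be to state the required format explicitly in the task description (hypothetical wording, untested with local models):

```python
# Hypothetical task description that spells out the Action Input format,
# to discourage the model from appending commentary after the dict.
task_description = (
    "Make every effort to find the appropriate tools to address user requests. "
    "When calling a tool, the Action Input line must contain ONLY a single "
    'JSON object, e.g. {"city": "New York City"}, with no text before or after it.'
)
print(task_description)
```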
Looking forward to your response. Thank you.