### First check
### Describe the current behavior
Apologies if I'm doing this wrong, but I've consulted the documentation and GitHub, and can't find a way to do this so far.

When creating a `Run` with `thread.run(assistant=ai)`, I'd like to force the assistant to use a specific tool, in this case `FileSearchTool`. With the OpenAI API, that's accomplished with the `tool_choice` parameter to the create-run endpoint (available as of April 17th, I think). You would pass `{"type": "file_search"}` as its value, which I think you'd expect to do like this? But I get a validation error trying that:
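For reference, a minimal sketch of the raw request body the OpenAI create-run endpoint accepts (the assistant id here is a placeholder; the relevant piece is `tool_choice`):

```python
import json

# Sketch of the raw "create run" request body in the OpenAI Assistants API.
# The assistant id is a placeholder; setting tool_choice forces the model
# to call the file_search tool on this run instead of deciding on its own.
create_run_body = {
    "assistant_id": "asst_abc123",           # placeholder assistant id
    "tool_choice": {"type": "file_search"},  # force the file search tool
}

print(json.dumps(create_run_body, indent=2))
```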
### Describe the proposed behavior
If that's not already a feature I'm missing, then the proposal would be a way to force tool use:
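Something like this, perhaps. This is a hypothetical sketch of the proposed interface, not the library's current API; the `Thread` stub below just returns the payload such a parameter would forward to the create-run endpoint:

```python
# Hypothetical sketch of the proposal: thread.run() grows a tool_choice
# parameter and forwards it to the OpenAI create-run endpoint. This stub
# only builds and returns the request payload it would send.
class Thread:
    def run(self, assistant, tool_choice=None):
        body = {"assistant_id": assistant}
        if tool_choice is not None:
            body["tool_choice"] = tool_choice  # e.g. {"type": "file_search"}
        return body

payload = Thread().run("asst_abc123", tool_choice={"type": "file_search"})
print(payload)
```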
### Example Use
I find that for some complex examples, forcing a consult with the vector store on each response is worth the latency, to ensure each reply has context. I think the least impressive bit of the vector store setup is the model's ability to decide when to use it (though I have yet to test `gpt4o` much).

### Additional context
No response