
initial commit of the OpenAI Agent POC #629

Open · wants to merge 41 commits into base: main

Conversation
birdperson1970 commented:

This is still rough and needs a new UI field to allow the user to set their own assistant_id. If you comment out:

`self.assistant_id='thread_lCb2rKUzIOodcA3MZaZu61Bv'`

it will create a basic assistant each time, which will be really annoying to clean up. A rough sketch of the idea is below.
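A minimal sketch of how this could work, assuming the OpenAI v1 Python SDK; the class shape, field names, and model string are assumptions for illustration, not the PR's actual code:

```python
from typing import Optional
from openai import AsyncOpenAI  # openai>=1.x SDK

class OpenAIAgent:
    def __init__(self, assistant_id: Optional[str] = None):
        self.client = AsyncOpenAI()
        # Populated from a (hypothetical) UI field; None triggers creation.
        self.assistant_id = assistant_id

    async def _ensure_assistant(self) -> str:
        # Create a single fallback assistant only when none was supplied,
        # instead of a new one per session, so there is less to clean up.
        if self.assistant_id is None:
            assistant = await self.client.beta.assistants.create(
                name="continue-agent-poc",   # assumed name
                model="gpt-4-1106-preview",  # assumed model
            )
            self.assistant_id = assistant.id
        return self.assistant_id
```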

netlify bot commented Nov 16, 2023:

Deploy Preview for continuedev canceled.

🔨 Latest commit: 9cf002d
🔍 Latest deploy log: https://app.netlify.com/sites/continuedev/deploys/657458006723220008edcbbd
async def _stream_chat(self, messages: List[ChatMessage], options):
Contributor commented:

This looks like the really important part!

There's a high-level thing that I want to point out right away. It's critical to make sure that every sub-class of LLM has the same interface, and it would probably require changing something here to give it access to the SDK to call tools.

So the first thing I might think is that you could find a way to keep a lot of this logic inside of the OpenAIAgent class and just call the SDK from inside of a step. But the question then becomes: how do we pass the output of the tools back into the assistant?

It might actually be possible to do the following in a Step: [call stream_chat] -> [get response] -> [call tool based on response (in the Step, not OpenAIAgent)] -> [pass results of tool back as the input to another stream_chat call] -> [repeat].

The important thing is to make sure that no "state" is stored in the LLM class, so that it can be called multiple times without worrying about what previous calls did. I think the key to this is creating a new thread for each stream_chat call, and making separate stream_chat calls whenever a tool is invoked. A sketch of that loop is below.
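A minimal sketch of that Step-driven loop, assuming a stateless `stream_chat` that yields text chunks; `parse_tool_call` and `run_tool` are hypothetical helpers standing in for whatever the Step would use to detect and execute tools:

```python
from typing import List, Optional

def parse_tool_call(response: str) -> Optional[dict]:
    """Hypothetical helper: extract a tool request from the response, if any."""
    return None  # stand-in; a real Step would parse the model output here

def run_tool(tool_call: dict) -> str:
    """Hypothetical helper: execute the requested tool and return its output."""
    return ""

async def run_step(llm, messages: List[dict]) -> str:
    """Drives [stream_chat] -> [tool] -> [stream_chat] without LLM-side state."""
    while True:
        # Each stream_chat call is independent; no state lives in the LLM class.
        response = ""
        async for chunk in llm.stream_chat(messages):
            response += chunk

        tool_call = parse_tool_call(response)
        if tool_call is None:
            return response  # no tool requested: the step is done

        result = run_tool(tool_call)  # run in the Step, not in OpenAIAgent
        messages = messages + [
            {"role": "assistant", "content": response},
            {"role": "user", "content": f"Tool output: {result}"},
        ]
```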

Author commented:

I understand that the LLMs need to be reentrant. OpenAI Agent conversational Threads are designed to capture the entire context, with optimisation taking place once it approaches the 128k context window.

Would you object to the following changes (a rough sketch is below)?
1. A thread_id UUID field is added to SessionState - given that sessionState.title is editable in the IDE, a new persistent identifier would seem appropriate. This would give a key to the SessionState and serve you well in the long term.
2. Map the SessionState.thread_id to an OpenAI thread.id - if you could point me in the right direction on how to do this in a persistent manner …
3. Add thread_id to the CompletionOptions for the call to stream_chat.
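A minimal sketch of the three proposed changes; the dataclass shapes and field names are assumptions about Continue's actual SessionState and CompletionOptions, and the in-memory dict stands in for whatever persistent store would back the mapping:

```python
import uuid
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SessionState:
    title: str
    # (1) a stable identifier, unlike the user-editable title
    thread_id: str = field(default_factory=lambda: str(uuid.uuid4()))

# (2) maps SessionState.thread_id -> OpenAI thread id; a real
# implementation would persist this (e.g. to disk) across restarts
session_to_openai_thread: Dict[str, str] = {}

@dataclass
class CompletionOptions:
    # (3) threaded through to stream_chat so the agent can resume a thread
    thread_id: Optional[str] = None
```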
