Streaming Out OpenInterpreter's Output + Trigger (Y/N) Remotely #1259

Open · wants to merge 7 commits into base: main
25 changes: 25 additions & 0 deletions docs/guides/streaming-output.mdx
@@ -0,0 +1,25 @@
---
title: Streaming Output
---

You can stream all of Open Interpreter's output by passing an optional `stream_out=function` argument to an `interpreter.chat()` call.

You can also answer Open Interpreter's (Y/N) confirmation prompt remotely by passing `async_input={"input": None, "code_revision": None}`. Setting `async_input["input"]` to your answer triggers the (Y/N) confirmation.

Additionally, you can assign new code to `async_input["code_revision"]`, and it will be executed instead of the last code block (useful for manual editing).

```python
from interpreter import interpreter

## ↓ THIS FUNCTION IS CALLED ON ALL OF OI'S OUTPUTS
def stream_out_hook(partial, debug=False, *a, **kw):
    ''' THIS FUNCTION PROCESSES ALL THE OUTPUTS FROM OPEN INTERPRETER '''
    if debug: print("STREAMING OUT! ", partial)
    # Replace this function with one that sends the output to YOUR application

## ↓ THIS IS THE OBJECT BEING WATCHED TO TRIGGER INPUT
async_input_data = {"input": None, "code_revision": None}
## ↑ SETTING async_input_data["input"] TRIGGERS OI'S (Y/N/OTHER) CONFIRMATION INPUT

interpreter.chat(stream_out=stream_out_hook, async_input=async_input_data)
```
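
For example, here is a minimal sketch of answering the confirmation from a second thread (the `approve_after` helper and the 20-second delay are illustrative; the full flow lives in `examples/stream_out.py`):

```python
import time
from threading import Thread

def approve_after(seconds, answer="y", code_revision=None):
    # Hypothetical helper: wait, optionally swap in new code, then answer the prompt
    time.sleep(seconds)
    async_input_data["code_revision"] = code_revision  # optional: replaces the last code block
    async_input_data["input"] = answer                 # setting this triggers the (Y/N) confirmation

Thread(target=approve_after, args=[20]).start()
interpreter.chat(stream_out=stream_out_hook, async_input=async_input_data)
```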

For a more comprehensive example, check out [examples/stream_out.py](https://github.com/KillianLucas/open-interpreter/blob/main/docs/examples/stream_out.py).
72 changes: 72 additions & 0 deletions docs/language-models/hosted-models/groq.mdx
@@ -0,0 +1,72 @@
---
title: Groq
---

To use Open Interpreter with a model from Groq, simply run:

<CodeGroup>

```bash Terminal
interpreter --model groq/llama3-8b-8192
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "groq/llama3-8b-8192"
interpreter.llm.api_key = '<groq-api-key>'
interpreter.chat()
```

</CodeGroup>

If you run into issues when passing `--model`, try adding `--api_base`:

<CodeGroup>

```bash Terminal
interpreter --api_base "https://api.groq.com/openai/v1" --model groq/llama3-8b-8192 --api_key $GROQ_API_KEY
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "groq/llama3-8b-8192"
interpreter.llm.api_key = '<groq-api-key>'
interpreter.llm.api_base = "https://api.groq.com/openai/v1"
interpreter.llm.context_window = 32000
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support any model on [Groq's models page](https://console.groq.com/docs/models):

<CodeGroup>

```bash Terminal
interpreter --model groq/mixtral-8x7b-32768
interpreter --model groq/llama3-8b-8192
interpreter --model groq/llama3-70b-8192
interpreter --model groq/gemma-7b-it
```

```python Python
interpreter.llm.model = "groq/mixtral-8x7b-32768"
interpreter.llm.model = "groq/llama3-8b-8192"
interpreter.llm.model = "groq/llama3-70b-8192"
interpreter.llm.model = "groq/gemma-7b-it"
```

</CodeGroup>

# Required Environment Variables

Run `export GROQ_API_KEY='<your-key-here>'`, or add that line to your shell rc file and re-source it.
Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| -------------------- | ---------------------------------------------------- | ------------------------------------------------------------------- |
| `GROQ_API_KEY` | The API key for authenticating to Groq's services. | [Groq Account Page](https://console.groq.com/keys) |
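
As a quick sketch, you can then read the key from the environment instead of hard-coding it:

```python
import os

from interpreter import interpreter

interpreter.llm.model = "groq/llama3-8b-8192"
interpreter.llm.api_key = os.environ["GROQ_API_KEY"]  # read from the environment, never hard-code keys
interpreter.chat()
```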
195 changes: 195 additions & 0 deletions examples/stream_out.py
@@ -0,0 +1,195 @@
from interpreter import interpreter, computer
import time
# ____ ____ __ __
# / __ \____ ___ ____ / _/___ / /____ _________ ________ / /____ _____
# / / / / __ \/ _ \/ __ \ / // __ \/ __/ _ \/ ___/ __ \/ ___/ _ \/ __/ _ \/ ___/
# / /_/ / /_/ / __/ / / / _/ // / / / /_/ __/ / / /_/ / / / __/ /_/ __/ /
# \____/ .___/\___/_/ /_/ /___/_/ /_/\__/\___/_/ / .___/_/ \___/\__/\___/_/
# /_/ /_/
# ____ _____ ____ _____ _ __ __ ___ _ _ _____
# / ___|_ _| _ \| ____| / \ | \/ | / _ \| | | |_ _|
# \___ \ | | | |_) | _| / _ \ | |\/| |_____| | | | | | | | |
# ___) || | | _ <| |___ / ___ \| | | |_____| |_| | |_| | | |
# |____/ |_| |_| \_\_____/_/ \_\_| |_| \___/ \___/ |_|

'''
# THIS EXAMPLE SHOWS HOW TO:
# 1. Stream all of Open Interpreter's output to another process (like another UI)
# 2. Use async input to trigger the (Y/N/Other) confirmation remotely
# 3. Make changes to the code before execution
#    - If you answer anything other than "y" or "n", your answer is treated as a user message
#    - If you manually change the code, the revised code is what gets run
'''


interpreter.llm.model = 'groq/llama3-70b-8192'  # or e.g. 'groq/mixtral-8x7b-32768'
interpreter.llm.api_key = '<groq-api-key>'  # replace with your key; never commit a real key
interpreter.llm.api_base = "https://api.groq.com/openai/v1"
interpreter.llm.context_window = 32000


#______________________________________
# Shared data used for async streaming out
from collections import deque

block_queue = deque()
full_queue = deque()
blocks_unfinished = deque()
pauseSend = [False]
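# pauseSend is a one-element list so the update thread can toggle it in place;
# True suppresses _update while a code block is being streamed (see update_queue below)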

# Useful for WhatsApp and other messaging apps (set update_by_blocks to True)
update_by_blocks = False
ignore_formats = ['active_line']
independent_blocks = ['confirmation', 'output'] # These are sent as whole blocks
#______________________________________


#______________________________________
# Prep for my implementation
# from xo_benedict.freshClient import FreshClient
# client = FreshClient(_inc=3)
#______________________________________

## ↓ EXAMPLE OF THE FINAL METHOD THAT STREAMS OI'S OUTPUT TO ANOTHER PROGRAM
def _update(item, debug=False):
    def _format(lines):
        hSize = 4
        return lines.replace("**Plan:**", f"<h{hSize}>Plan:</h{hSize}>").replace("**Code:**", f"<h{hSize}>Code:</h{hSize}>")

    if not pauseSend[0]:
        if debug: print("::: STREAMING OUT:", item)
        stream_out = _format(str(item))

        ## ↓↓↓ SEND OUT OI'S OUTPUT

        #client.addText(stream_out) # Just an example, my personal implementation
        if debug: print(" --CHANGE THIS-- STREAMING OUTPUT: ", stream_out)

        ## ↑↑↑ CHANGE THIS to something that triggers YOUR APPLICATION
        ## ↑ You can swap this function for one that suits your application



# ↓ STREAM-OUT HOOK - This function is passed into chat() and is called on every output from Open Interpreter
def stream_out_hook(partial, debug=False, *a, **kw):
    # Gets chunks from Open Interpreter and sends them to an async update queue
    ''' THIS FUNCTION PROCESSES ALL THE OUTPUTS FROM OPEN INTERPRETER
        Prepares all the chunks to be sent out.
        update_by_blocks=True batches similar messages; False streams them (unless the type is in independent_blocks)
    '''
    if debug: print("STREAMING OUT! ", partial)

    ## ↓ Send all the different Open Interpreter chunk types to the queue

    if "start" in partial and partial["start"]:
        if update_by_blocks:
            blocks_unfinished.append({"content": "", **partial})
        else:
            full_queue.append(partial)
    if partial.get('type') in independent_blocks or partial.get('format') in independent_blocks:
        if update_by_blocks:
            block_queue.append({"independent": True, **partial})
        else:
            full_queue.append({"independent": True, **partial})
        if debug: print("INDEPENDENT BLOCK", partial)
    elif 'content' in partial and ('format' not in partial or partial['format'] not in ignore_formats):
        if update_by_blocks:
            blocks_unfinished[0]['content'] += partial['content']
        else:
            full_queue.append(partial['content'])
    if 'end' in partial:
        if debug: print("End of block:", blocks_unfinished, partial)
        fin = {**partial}
        if update_by_blocks:
            blocks_unfinished[0]['end'] = partial['end']
            fin = blocks_unfinished.popleft()
            block_queue.append(fin)
        else:
            full_queue.append(fin)

        if debug: print("FINISHED BLOCK", fin)




#______________________________________
# Continuously receive Open Interpreter chunks and prepare them to be sent out
def update_queue(debug=False, *a, **kw):
    target = full_queue
    if update_by_blocks:
        target = block_queue
    c = 0
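    # Drain the queue completely on each pass, then poll again after 100 ms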
    while True:
        while len(target) > 0:
            leftmost_item = target.popleft()
            if debug: print(f"{c} ::: UPDATING QUEUE:", leftmost_item)

            # Handle plain string chunks first (the membership tests below assume a dict)
            if isinstance(leftmost_item, str):
                _update(leftmost_item)
            elif "start" in leftmost_item:
                if "type" in leftmost_item and leftmost_item["type"] == "code":
                    _update("__________________________________________\n")
                    pauseSend[0] = True
            elif "end" in leftmost_item:
                if "type" in leftmost_item and leftmost_item["type"] == "code":
                    pauseSend[0] = False
            else:
                content = leftmost_item.get("content", "")
                if not isinstance(content, str):  # unwrap nested {"content": {"content": ...}} chunks
                    content = content["content"]
                if len(content) > 0 and content[0] == "\n": content = content[1:]
                if "type" in leftmost_item and leftmost_item["type"] in ["confirmation"]:
                    if len(content) > 0 and content[0] != "<" and content[-1] != ">": content = "<code>" + content + "</code>"
                    _update(content + "<h4> Would you like to run this code? (Y/N)</h4>"
                            + "<span style=\"color: grey;\"> You can also edit it before accepting</span><br>__________________________________________<br></x>")
                elif "type" in leftmost_item and leftmost_item["type"] == 'console':
                    if len(content) > 0 and content != "\n":
                        if debug: print(f"::: content :::{content}:::")
                        if content[0] != "<" and content[-1] != ">": content = "<code>" + content + "</code>"
                        _update(f"<h3>OUTPUT:</h3>{content}<br>")
                else:
                    _update(leftmost_item)
        time.sleep(0.1)

from threading import Thread
update_queue_thread = Thread(target=update_queue, daemon=True)  # daemon, so the program can exit when chat() returns
update_queue_thread.start()
# ↑ Start an async thread to process chunks before streaming them out
#______________________________________

# Run tests, one after the other
def test_async_input(tests):
    for i, answer, code_revision in tests:
        # Wait i seconds, printing a countdown every 5 seconds
        while i > 0:
            if i % 5 == 0: print(f"::: Testing Input:\"{answer}\" with code:{code_revision} in: {i} seconds")
            time.sleep(1)
            i -= 1

        ## ↓ TRIGGER EXTERNAL INPUT
        async_input_data["input"] = answer
        async_input_data["code_revision"] = code_revision
        ## ↑ OPTIONAL CODE CHANGES


## ↓ THIS IS THE OBJECT BEING WATCHED TO TRIGGER INPUT
async_input_data = {"input": None, "code_revision": None}
## ↑ SETTING async_input_data["input"] TRIGGERS OI'S INPUT
if __name__ == "__main__":

    ## Test automatic external trigger for (Y/N/Other) + code revisions
    tests = [
        # seconds_to_wait, input_response, new_code_to_run
        [20, "Y", "print('SUCCESS!!!!!!!!')"],
        # [20, "N", None],
        # [20, "print hello {username from host} instead", None],
    ]
    Thread(target=test_async_input, args=[tests]).start()

    # Start Open Interpreter, passing in the stream_out_hook function and async_input_data
    interpreter.chat(stream_out=stream_out_hook, async_input=async_input_data)
19 changes: 14 additions & 5 deletions interpreter/core/core.py
@@ -73,7 +73,8 @@ def __init__(
skills_path=None,
import_skills=False,
multi_line=False,
contribute_conversation=False
contribute_conversation=False,
stream_out=None,
):
# State
self.messages = [] if messages is None else messages
@@ -92,6 +93,7 @@ def __init__(
self.in_terminal_interface = in_terminal_interface
self.multi_line = multi_line
self.contribute_conversation = contribute_conversation
self.stream_out = stream_out

# Loop messages
self.force_task_completion = force_task_completion
@@ -143,7 +145,7 @@ def will_contribute(self):
overrides = self.offline or not self.conversation_history or self.disable_telemetry
return self.contribute_conversation and not overrides

def chat(self, message=None, display=True, stream=False, blocking=True):
def chat(self, message=None, display=True, stream=False, blocking=True, stream_out=None, async_input=None):
try:
self.responding = True
if self.anonymous_telemetry:
@@ -170,7 +172,14 @@ def chat(self, message=None, display=True, stream=False, blocking=True):
return self._streaming_chat(message=message, display=display)

# If stream=False, *pull* from the stream.
for _ in self._streaming_chat(message=message, display=display):
for chunk in self._streaming_chat(message=message, display=display, async_input=async_input):
# Send out the stream of incoming chunks.
# This is useful if you want to drive Open Interpreter from a different interface.
if self.debug: print(f" ::: Streaming out: {chunk}")
if stream_out: stream_out(chunk) # The stream_out parameter takes priority over self.stream_out
elif self.stream_out: self.stream_out(chunk)

# If no stream_out is provided, just *pull* from the stream
pass

# Return new messages
@@ -193,13 +202,13 @@

raise

def _streaming_chat(self, message=None, display=True):
def _streaming_chat(self, message=None, display=True, async_input=None):
# Sometimes a little more code -> a much better experience!
# Display mode actually runs interpreter.chat(display=False, stream=True) from within the terminal_interface.
# wraps the vanilla .chat(display=False) generator in a display.
# Quite different from the plain generator stuff. So redirect to that
if display:
yield from terminal_interface(self, message)
yield from terminal_interface(self, message, async_input=async_input)
return

# One-off message
7 changes: 6 additions & 1 deletion interpreter/core/respond.py
@@ -143,7 +145,7 @@ def respond(interpreter):
# Is this language enabled/supported?
if interpreter.computer.terminal.get_language(language) == None:
output = f"`{language}` disabled or not supported."

yield {
"role": "computer",
"type": "console",
@@ -175,6 +174,12 @@ def respond(interpreter):
# We need to tell python what we (the generator) should do if they exit
break

# Check if the code was changed.
# Gives user a chance to manually edit the code before execution
if (interpreter.messages[-1]["type"] == "code" and code != interpreter.messages[-1]["content"]) or (interpreter.messages[-2]["type"] == "code" and code != interpreter.messages[-2]["content"]):
print("(Code has been modified)")
code = interpreter.messages[-1]["content"] if interpreter.messages[-1]["type"] == "code" else interpreter.messages[-2]["content"]

# don't let it import computer — we handle that!
if interpreter.computer.import_computer_api and language == "python":
code = code.replace("import computer\n", "pass\n")