
Error when using a local Ollama model #1274

Open
mofanx opened this issue May 17, 2024 · 1 comment

Comments

mofanx commented May 17, 2024

Describe the bug

interpreter --local
:88: SyntaxWarning: "is" with a literal. Did you mean "=="?

▌ Open Interpreter is compatible with several local model providers.

[?] What one would you like to use?:

Ollama
Llamafile
LM Studio
Jan

5 Ollama models found. To download a new model, run `ollama run <model-name>`, then start a new interpreter session.

For a full list of downloadable models, check out https://ollama.com/library

[?] Select a downloaded Ollama model::
failed
NAME
qwen
llama3

codeqwen

Using Ollama model: codeqwen


Traceback (most recent call last):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 229, in fixed_litellm_completions
yield from litellm.completion(**params)
File "E:\Program Files\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 297, in ollama_completion_stream
raise e
File "E:\Program Files\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 256, in ollama_completion_stream
status_code=response.status_code, message=response.text
^^^^^^^^^^^^^
File "E:\Program Files\Python\Python311\Lib\site-packages\httpx_models.py", line 574, in text
content = self.content
^^^^^^^^^^^^
File "E:\Program Files\Python\Python311\Lib\site-packages\httpx_models.py", line 568, in content
raise ResponseNotRead()
httpx.ResponseNotRead: Attempted to access streaming response content, without having called read().

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\respond.py", line 69, in respond
for chunk in interpreter.llm.run(messages_for_llm):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 201, in run
yield from run_text_llm(self, params)
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\run_text_llm.py", line 20, in run_text_llm
for chunk in llm.completions(**params):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 232, in fixed_litellm_completions
raise first_error
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 213, in fixed_litellm_completions
yield from litellm.completion(**params)
File "E:\Program Files\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 297, in ollama_completion_stream
raise e
File "E:\Program Files\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 256, in ollama_completion_stream
status_code=response.status_code, message=response.text
^^^^^^^^^^^^^
File "E:\Program Files\Python\Python311\Lib\site-packages\httpx_models.py", line 574, in text
content = self.content
^^^^^^^^^^^^
File "E:\Program Files\Python\Python311\Lib\site-packages\httpx_models.py", line 568, in content
raise ResponseNotRead()
httpx.ResponseNotRead: Attempted to access streaming response content, without having called read().

    Python Version: 3.11.8
    Pip Version: 24.0
    Open-interpreter Version: cmd: Open Interpreter 0.2.5 New Computer Update, pkg: 0.2.5
    OS Version and Architecture: Windows-10-10.0.22631-SP0
    CPU Info: AMD64 Family 23 Model 96 Stepping 1, AuthenticAMD
    RAM Info: 15.37 GB, used: 9.25, free: 6.12

    # Interpreter Info

    Vision: False
    Model: ollama/codeqwen
    Function calling: False
    Context window: 8000
    Max tokens: 1200

    Auto run: False
    API base: None
    Offline: True

    Curl output: Not local

    # Messages

    System Message:

You are Open Interpreter, a world-class programmer that can execute code on the user's machine.
First, list all of the information you know related to the user's request.
Next, write a plan. Always recap the plan between each code block (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it).
The code you write must be able to be executed as is. Invalid syntax will cause a catastrophic failure. Do not include the language of the code in the response.
When you execute code, it will be executed on the user's machine. The user has given you full and complete permission to execute any code necessary to complete the task. Execute the code.
You can access the internet. Run any code to achieve the goal, and if at first you don't succeed, try again and again.
You can install new packages.
When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in.
Write messages to the user in Markdown.
In general, try to make plans with as few steps as possible. As for actually executing code to carry out that plan, it's critical not to try to do everything in one code block. You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see.
You are capable of any task.
Once you have accomplished the task, ask the user if they are happy with the result and wait for their response. It is very important to get feedback from the user.
The user will tell you the next task after you ask them.

    {'role': 'user', 'type': 'message', 'content': '?'}

Traceback (most recent call last):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 229, in fixed_litellm_completions
yield from litellm.completion(**params)
File "E:\Program Files\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 297, in ollama_completion_stream
raise e
File "E:\Program Files\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 256, in ollama_completion_stream
status_code=response.status_code, message=response.text
^^^^^^^^^^^^^
File "E:\Program Files\Python\Python311\Lib\site-packages\httpx_models.py", line 574, in text
content = self.content
^^^^^^^^^^^^
File "E:\Program Files\Python\Python311\Lib\site-packages\httpx_models.py", line 568, in content
raise ResponseNotRead()
httpx.ResponseNotRead: Attempted to access streaming response content, without having called read().

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\respond.py", line 69, in respond
for chunk in interpreter.llm.run(messages_for_llm):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 201, in run
yield from run_text_llm(self, params)
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\run_text_llm.py", line 20, in run_text_llm
for chunk in llm.completions(**params):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 232, in fixed_litellm_completions
raise first_error
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\llm\llm.py", line 213, in fixed_litellm_completions
yield from litellm.completion(**params)
File "E:\Program Files\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 297, in ollama_completion_stream
raise e
File "E:\Program Files\Python\Python311\Lib\site-packages\litellm\llms\ollama.py", line 256, in ollama_completion_stream
status_code=response.status_code, message=response.text
^^^^^^^^^^^^^
File "E:\Program Files\Python\Python311\Lib\site-packages\httpx_models.py", line 574, in text
content = self.content
^^^^^^^^^^^^
File "E:\Program Files\Python\Python311\Lib\site-packages\httpx_models.py", line 568, in content
raise ResponseNotRead()
httpx.ResponseNotRead: Attempted to access streaming response content, without having called read().

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in run_code
File "E:\Program Files\Python\Python311\Scripts\interpreter.exe_main
.py", line 7, in
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 453, in main
start_terminal_interface(interpreter)
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 427, in start_terminal_interface
interpreter.chat()
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 166, in chat
for _ in self._streaming_chat(message=message, display=display):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 195, in _streaming_chat
yield from terminal_interface(self, message)
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 133, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 234, in _streaming_chat
yield from self._respond_and_store()
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\core.py", line 282, in _respond_and_store
for chunk in respond(self):
File "E:\Program Files\Python\Python311\Lib\site-packages\interpreter\core\respond.py", line 115, in respond
raise Exception(
Exception: Error occurred. Attempted to access streaming response content, without having called read().
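The secondary httpx.ResponseNotRead exception hides the real failure from Ollama: litellm opens the response in streaming mode, and on a streaming httpx response `.text` is only available after the body has been read. Below is a minimal sketch of that behaviour (not litellm's actual code; the URL and payload are illustrative only), showing the `response.read()` call that would make the underlying error text accessible.

```python
# Minimal sketch (not litellm's code) of the httpx.ResponseNotRead behaviour
# seen in the traceback. The endpoint and payload are illustrative only.
import httpx

with httpx.Client() as client:
    with client.stream(
        "POST",
        "http://localhost:11434/api/generate",  # hypothetical failing request
        json={"model": "codeqwen", "prompt": "hi"},
    ) as response:
        if response.status_code != 200:
            # Accessing response.text at this point raises httpx.ResponseNotRead,
            # which is what masks the original Ollama error message above.
            response.read()  # read the streamed body first
            print(response.status_code, response.text)
```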

Reproduce

1. The error is raised whenever open-interpreter is run with a local Ollama model (a standalone reproducer is sketched below).
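A minimal reproducer outside of Open Interpreter may help isolate the failure. The sketch below assumes an Ollama server on the default localhost:11434 with the codeqwen model already pulled, and drives the same litellm.completion streaming call that appears in the traceback.

```python
# Hypothetical standalone reproducer; assumes a local Ollama server on the
# default port with "codeqwen" pulled. It exercises the same litellm
# streaming path shown in the traceback above.
import litellm

stream = litellm.completion(
    model="ollama/codeqwen",
    messages=[{"role": "user", "content": "?"}],
    api_base="http://localhost:11434",
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta; content may be None.
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```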

Expected behavior

1. I would like to know how to resolve this error.

Screenshots

No response

Open Interpreter version

open-interpreter Version: 0.2.5

Python version

Python 3.11.8

Operating System name and version

Windows 11 Home (Chinese edition) 22631.3593

Additional context

No response

@z82134359

Try a smaller model. I ran into this problem as well; it seems related to the model's response speed. When Ollama generates slowly, this error gets raised.
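If slow generation is indeed the trigger, timing how long Ollama takes to return its first streamed token may confirm it. Below is a rough sketch against Ollama's /api/generate endpoint; the prompt and the generous read timeout are arbitrary test values.

```python
# Rough check of the response-speed hypothesis: measure the time until
# Ollama's /api/generate returns its first streamed chunk. The timeout
# values and prompt are arbitrary choices for this test.
import json
import time

import httpx

start = time.monotonic()
with httpx.stream(
    "POST",
    "http://localhost:11434/api/generate",
    json={"model": "codeqwen", "prompt": "Say hello."},
    timeout=httpx.Timeout(600.0, connect=10.0),
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line:
            continue
        elapsed = time.monotonic() - start
        print(f"first chunk after {elapsed:.1f}s:", json.loads(line).get("response"))
        break
```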
