
[Bug] After receiving a truncated image base64 payload, a thread exits and the service becomes abnormal; the service does not stop, but new POST requests can no longer get through #1602

Closed
LRHstudy opened this issue May 16, 2024 · 3 comments

@LRHstudy

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.

Describe the bug

A worker thread exits, leaving the service in a broken state: the process does not stop, but new POST requests can no longer get in and are never handled. Some exception-handling mechanism is missing.
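
For reference, a minimal sketch of the failure pattern (my own illustration, not lmdeploy's actual code): if the per-item work in a worker loop is not wrapped in try/except, one bad input kills the thread while the main process keeps running, and everything queued afterwards waits forever.

import queue
import threading
import time

task_queue = queue.Queue()

def worker():
    while True:
        item = task_queue.get()
        try:
            print("processed:", 1 / item)   # raises ZeroDivisionError on item == 0
        except Exception as exc:
            # Without this except block the exception escapes the thread's run(),
            # the thread exits, and every later item waits forever unprocessed.
            print("skipped bad item:", exc)

threading.Thread(target=worker, daemon=True).start()
for item in (2, 0, 3):
    task_queue.put(item)
time.sleep(1)  # give the worker time to drain the queue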

Reproduction

Start the qwenvl service (or another multimodal model service) on port 8888 from the command line:
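For example (the model path is a placeholder, adjust it to your local checkpoint):

lmdeploy serve api_server Qwen/Qwen-VL-Chat --server-port 8888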

The request below passes in a truncated image base64 string:
from openai import OpenAI
import base64
import time

client = OpenAI(
    api_key="EMPTY",
    base_url="http://127.0.0.1:8888/v1/",
)

image_path = "***.jpg"
with open(image_path, "rb") as image_file:
    image = "data:image/jpeg;base64," + base64.b64encode(image_file.read()).decode('utf-8')

model_name = client.models.list().data[0].id
print(model_name)

st = time.time()
print_stream = True
stream = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "请描述一下图像"  # "Please describe the image"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": image[:-100]  # truncate the image here
                    }
                },
            ]
        },
    ],
    model=model_name,
    stream=print_stream,
    max_tokens=150,
    top_p=0.2,
)

if not print_stream:
    print(stream)
else:
    for part in stream:
        # print(part)
        print(part.choices[0].delta.content or "", end="", flush=True)
print('process_time:{}'.format(round(time.time() - st, 3)))
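
As a client-side guard (my own sketch, not part of lmdeploy; the helper name is illustrative), the base64 payload can be decoded and fully loaded with PIL before sending, which catches truncated files like the one above:

import base64
import io

from PIL import Image

def is_complete_image(data_url: str) -> bool:
    """Return True if the base64 image in a data URL fully decodes and loads."""
    try:
        b64_payload = data_url.split(",", 1)[1]
        raw = base64.b64decode(b64_payload)
        Image.open(io.BytesIO(raw)).load()  # load() raises OSError on truncated files
        return True
    except Exception:
        return False

# Example: the truncated payload from the reproduction above would fail this check.
# assert not is_complete_image(image[:-100])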

Environment

lmdeploy: v4.0.0, v4.0.1

Error traceback

INFO:     127.0.0.1:38654 - "GET /v1/models HTTP/1.1" 200 OK
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/lmdeploy/lmdeploy/vl/engine.py", line 80, in _work_thread
    self.loop.run_until_complete(self._forward_loop())
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/opt/lmdeploy/lmdeploy/vl/engine.py", line 96, in _forward_loop
    outputs = self.forward(inputs)
  File "/opt/lmdeploy/lmdeploy/vl/engine.py", line 105, in forward
    outputs = self.model.forward(inputs)
  File "/opt/py38/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/lmdeploy/lmdeploy/vl/model/qwen.py", line 41, in forward
    outputs = [x.convert('RGB') for x in images]
  File "/opt/lmdeploy/lmdeploy/vl/model/qwen.py", line 41, in <listcomp>
    outputs = [x.convert('RGB') for x in images]
  File "/opt/py38/lib/python3.8/site-packages/PIL/Image.py", line 922, in convert
    self.load()
  File "/opt/py38/lib/python3.8/site-packages/PIL/ImageFile.py", line 288, in load
    raise OSError(msg)
OSError: image file is truncated (2 bytes not processed)
@AllentDan
Collaborator

Would you be interested in helping fix it?

@LRHstudy
Author

The truncated-image case can be fixed by adding the following code:
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
As for the exception handling, what we currently do is let the service exit once the problem is detected, and then rely on external tooling to restart it.
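
A possible server-side shape for that fix (my own sketch, not lmdeploy's actual patch; the function name is illustrative): tolerate truncated files via ImageFile.LOAD_TRUNCATED_IMAGES and, as a second line of defense, catch decode errors per image so one bad input cannot kill the worker loop.

from PIL import ImageFile

# Tolerate files whose tail is missing instead of raising OSError in load().
ImageFile.LOAD_TRUNCATED_IMAGES = True

def safe_convert(images):
    """Convert each image to RGB, flagging any that still fail to decode."""
    outputs = []
    for img in images:
        try:
            outputs.append(img.convert('RGB'))
        except OSError as exc:
            # Returning a marker instead of raising keeps the worker thread
            # alive, so later requests can still be served.
            outputs.append(None)
            print(f"failed to decode image: {exc}")
    return outputs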

@AllentDan
Collaborator

Hi @LRHstudy, you may check whether #1615 works for you.
