
feat: support vision models from xinference #4094

Merged
5 commits merged into langgenius:main on May 7, 2024

Conversation

Minamiyama
Contributor

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of Change

Please delete options that are not relevant.

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • Launch a vision model (e.g. qwen-vl-chat) on Xinference.
  • Add the model as a provider in Dify.
  • In workflow mode, select the vision model in an LLM node; an eye-shaped vision mark is shown.

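Under the hood, vision-capable providers such as this one consume OpenAI-style multimodal chat messages in which the image travels as a base64 data URL. A minimal sketch of that payload shape follows; the exact fields Dify emits are an assumption here, not taken from this PR, and the model name is only illustrative:

```python
import base64
import json

def build_vision_message(prompt: str, image_bytes: bytes, mime: str = "image/png") -> dict:
    """Build an OpenAI-style multimodal chat message that embeds the image
    as a base64 data URL, the shape vision-capable endpoints accept."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

# Illustrative request body; "qwen-vl-chat" matches the model tested above.
payload = {
    "model": "qwen-vl-chat",
    "messages": [build_vision_message("Describe this image.", b"<raw image bytes>")],
}
print(json.dumps(payload, indent=2)[:80])
```

A provider that advertises the vision capability (the "eye" mark in the UI) is expected to accept messages of this shape instead of plain-string content.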

@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. ⚙️ feat:model-runtime labels May 5, 2024
@takatost takatost requested a review from Yeuoly May 7, 2024 08:28
Collaborator

@Yeuoly Yeuoly left a comment


LGTM

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label May 7, 2024
@takatost takatost merged commit f361c70 into langgenius:main May 7, 2024
7 checks passed
@sunhaha123

Great job! How do I trigger it? I have already done a git pull.

@Yeuoly
Collaborator

Yeuoly commented May 7, 2024

Great job! How do I trigger it? I have already done a git pull.

Just set up qwen-vl-chat again in Dify and you will get it :).

@sunhaha123

sunhaha123 commented May 8, 2024

@Minamiyama xorbitsai/inference#1425 Should I update Xinference first? After updating only Dify, I still can't see the "eye" on qwen-vl-chat. When I use the OpenAI-API-compatible provider instead, it reports that streaming is not supported.

Updating Xinference from source does not go smoothly because of some recent changes.

@Minamiyama
Contributor Author

@Minamiyama xorbitsai/inference#1425 Should I update Xinference first? After updating only Dify, I still can't see the "eye" on qwen-vl-chat. When I use the OpenAI-API-compatible provider instead, it reports that streaming is not supported.

Updating Xinference from source does not go smoothly because of some recent changes.

Remove the old qwen-vl-chat from the Xinference provider and add it again as a completely new model; you should then see the eye mark. Also try creating a new workflow, just in case.

rennokki pushed a commit to rennokki/dify that referenced this pull request May 9, 2024
@sunhaha123

I figured it out: the Docker image was out of date.

@takatost takatost mentioned this pull request May 9, 2024
evnydd0sf pushed a commit to evnydd0sf/dify that referenced this pull request May 10, 2024