feat: detailed model info #1486

Closed
sammcj opened this issue Apr 10, 2024 · 6 comments

Comments

@sammcj
Contributor

sammcj commented Apr 10, 2024

Bug Report

Description

Bug Summary:

The UI doesn't show which server is providing a given model.

For example, say you have:

  1. An OpenAI backend
  2. An OpenAI compatible backend (e.g. Groq, Openrouter etc...)
  3. A server running an OpenAI compatible API
  4. A local server running an OpenAI compatible API
  5. One or more Ollama servers

When you're selecting which model to use, you have no idea which host will actually perform the inference.

For example, in the following screenshot these models could be provided by OpenAI, OpenRouter, Ollama or some other proxy - and if there are multiple accounts/keys for a service, which one is in use?

[Screenshot: SCR-20240410-oncj]

Steps to Reproduce:

Add multiple providers with the same or similar model names and try to select a model to use.

Another nice option might be to give servers a 'name' attribute and show that.

Expected Behaviour:

In the models dropdown, I'd expect the server to be shown next to the model name.

For example:

[Screenshot: modellist]

[Screenshot: servers]
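To make the idea concrete, here is a rough sketch of how a per-server `name` attribute could feed the dropdown label. This is purely illustrative: the `ServerConfig` and `ModelEntry` shapes and the `modelLabel` helper are assumptions for this example, not existing Open WebUI code.

```typescript
// Hypothetical sketch: each configured backend carries an optional display name,
// and the model picker appends it so identical model IDs from different servers
// stay distinguishable.
interface ServerConfig {
  baseUrl: string; // e.g. "https://api.openai.com/v1" or "http://localhost:11434"
  name?: string;   // optional display name, e.g. "OpenAI", "Groq", "local-ollama"
}

interface ModelEntry {
  id: string;           // model identifier reported by the backend, e.g. "llama3:8b"
  server: ServerConfig; // backend that serves this model
}

// Label shown in the model dropdown; fall back to the host when no name is set.
function modelLabel(m: ModelEntry): string {
  const serverName = m.server.name ?? new URL(m.server.baseUrl).hostname;
  return `${m.id} (${serverName})`;
}

// modelLabel({ id: "llama3:8b", server: { baseUrl: "http://localhost:11434", name: "local-ollama" } })
// -> "llama3:8b (local-ollama)"
```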

Actual Behaviour:

No model server is shown.

Environment

  • Operating System: Linux/Docker/NA
  • Browser (if applicable): Firefox 124/NA

Reproduction Details

Confirmation:

  • I have read and followed all the instructions provided in the README.md.
  • I am on the latest version of both Open WebUI and Ollama.
  • I have included the browser console logs.
  • I have included the Docker container logs.

Logs and Screenshots

Screenshots (if applicable):

[Screenshots: SCR-20240410-omnh, SCR-20240410-omlv]

Installation Method

n/a

Additional Information

n/a

Note

n/a

@justinh-rahb
Collaborator

justinh-rahb commented Apr 10, 2024

Ya, tbh I'd kinda like the "Assistant" profile picture in chats to instead be the logo we set for a given model (defaults can be inferred from API base URLs and/or model names in many cases), along with other things that have been requested by others, such as being able to edit names, set per-user rate limits, and other more granular controls for model and API access.

@sammcj
Contributor Author

sammcj commented Apr 10, 2024

Using the Assistant profile picture would tie in nicely if Assistants end up supporting multiple models/servers as well ;)

@abqareno

I also noticed that the system prompt isn't tied to the model in use; instead there's just one per chat, so we can't really switch models in between.

@sammcj
Contributor Author

sammcj commented Apr 13, 2024

@abqareno yeah I noticed that as well; I think that's worth logging as a separate bug / feature request though, if you could?

@tjbck changed the title from "UI doesn't show which server provides model" to "feat: detailed model info" Apr 14, 2024
@tjbck
Contributor

tjbck commented Apr 14, 2024

@abqareno blocker: #665

@tjbck
Contributor

tjbck commented May 26, 2024

Closing in favour of #1655

@tjbck closed this as completed May 26, 2024