
[Feat]: Add Evaluation and Tuning Playground #132

Open
patcher99 opened this issue Mar 21, 2024 · 0 comments · May be fixed by #266
Assignees
Labels
Client (Issue related to OpenLIT Client), 🚀 Feature (New feature or request)

Comments

@patcher99
Contributor

🚀 What's the Problem?

Currently, we can't quickly experiment with and compare different models side by side on their responses and related metrics such as token usage, cost, and latency.

💡 Your Dream Solution

Add a playground where users can compare models from different providers side by side on the same set of prompts.
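
A minimal sketch of what one side-by-side comparison run could look like (not the playground implementation itself); this assumes the OpenAI Python client, a hypothetical pair of models, and purely illustrative per-1K-token prices:

```python
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative per-1K-token prices only; a real playground would pull these
# from a pricing table per provider/model.
PRICES = {"gpt-3.5-turbo": 0.0015, "gpt-4": 0.03}

prompt = "Summarize the benefits of observability for LLM apps."

for model in ["gpt-3.5-turbo", "gpt-4"]:
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    latency = time.perf_counter() - start
    tokens = resp.usage.total_tokens
    cost = tokens / 1000 * PRICES[model]  # rough cost estimate
    print(f"{model}: {latency:.2f}s, {tokens} tokens, ~${cost:.4f}")
    print(resp.choices[0].message.content[:200])
```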

🤔 Seen anything similar?

OpenAI Playground, but for multiple model providers and for testing a single prompt rather than a whole chat.

🖼️ Pictures or Drawings

NA

👐 Want to Help Make It Happen?

  • Yes, I'd like to volunteer and help out with this!
@patcher99 patcher99 added ✋ Up for Grabs The issue is Up for Grabs 🚀 Feature New feature or request labels Mar 21, 2024
@patcher9 patcher9 added Client Issue related to OpenLIT Client and removed ✋ Up for Grabs The issue is Up for Grabs labels May 26, 2024
Projects
Status: In Progress

3 participants