What is the suggested best way to filter prompt responses by LLM? #160

Answered by ianarawjo
jonborchardt asked this question in Q&A

You're right, this is hard to do, and I've been meaning to implement it. The Chat Turn node lets you do something like this via its "continue using the same LLM" toggle, but that toggle doesn't currently work for Prompt nodes.

For now, you can work around this with parallel chains, one model per chain, then compare the results at the end. As long as the input(s) are the same, the prompt variables should behave similarly in inspectors and evaluators.
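
As a rough illustration of the filtering idea (not a built-in ChainForge feature): if you collect responses in a code evaluator or export them, each response typically carries some identifier for the model that produced it. Below is a minimal Python sketch, assuming a hypothetical list of response dicts with illustrative `llm` and `text` fields; the field names are assumptions, not ChainForge's actual schema.

```python
from collections import defaultdict

def group_by_llm(responses):
    """Group response texts by the model that produced them.

    Assumes each response is a dict with hypothetical "llm" and
    "text" fields (illustrative names, not ChainForge's schema).
    """
    groups = defaultdict(list)
    for resp in responses:
        groups[resp["llm"]].append(resp["text"])
    return groups

# Example usage with made-up data:
responses = [
    {"llm": "gpt-4", "text": "Answer A"},
    {"llm": "claude-2", "text": "Answer B"},
    {"llm": "gpt-4", "text": "Answer C"},
]

by_model = group_by_llm(responses)
print(by_model["gpt-4"])  # -> ['Answer A', 'Answer C']
```

Once responses are grouped this way, comparing per-model outputs side by side is straightforward, which is essentially what the parallel-chains workaround achieves visually.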

Answer selected by jonborchardt