Goal
It should be easy to monitor LLM security tools in Langfuse as scores. It would be helpful to include docs, a cookbook, and a blog post on how to do this.
Design
Langfuse runs asynchronously when integrated with applications. We do not want this to be a native feature; instead, we want to help people understand how to use open-source LLM security solutions and then monitor them in Langfuse.
Data model in tracing (see the sketch below):
- A safety check can be a span.
- It is helpful to measure how long the safety checks take, as they are most likely blocking.
- Attach a score to the safety-check span for each individual check that is performed.
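A minimal sketch of this data model, assuming the v2-style Langfuse Python SDK (`langfuse.trace()` / `trace.span()` / `langfuse.score()`); `check_prompt_injection` is a hypothetical stand-in for whatever security library is used:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment


def check_prompt_injection(text: str) -> float:
    """Hypothetical placeholder for any LLM security check; returns a risk score in [0, 1]."""
    return 0.1


def guarded_completion(user_prompt: str) -> str:
    trace = langfuse.trace(name="chat-request", input=user_prompt)

    # The safety check is modeled as a span, so its (blocking) latency is visible in the trace.
    span = trace.span(name="security-check", input=user_prompt)
    risk = check_prompt_injection(user_prompt)
    blocked = risk > 0.5
    span.end(output={"risk": risk, "blocked": blocked})  # closes the span and records its duration

    # One score per individual check, attached to the safety-check span.
    langfuse.score(
        trace_id=trace.id,
        observation_id=span.id,
        name="prompt-injection-risk",
        value=risk,
    )

    if blocked:
        trace.update(output="Request blocked by security check")
        return "Sorry, I can't help with that."

    # ... call the LLM here and return its answer ...
    answer = "..."
    trace.update(output=answer)
    return answer
```

Because the span carries its own start and end timestamps, the latency of the blocking check shows up directly in the trace, and the scores stay attached to that span rather than to the trace as a whole.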
Library to use in examples
While this will work with any library (it just creates a score which is logged to Langfuse), we'll need to reference one specific library across examples. We can always add other libraries or switch later on.
Options (not exhaustive, see list for more):
Let's use LLM Guard for the examples across the cookbook and blog post.
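A rough sketch of what this could look like with LLM Guard, assuming its `scan_prompt` helper (which returns the sanitized prompt plus per-scanner validity and risk-score dicts); the scanner selection and score names are illustrative:

```python
from langfuse import Langfuse
from llm_guard import scan_prompt
from llm_guard.input_scanners import PromptInjection, Toxicity

langfuse = Langfuse()
input_scanners = [PromptInjection(), Toxicity()]


def scan_and_log(trace, user_prompt: str) -> tuple[str, bool]:
    """Run LLM Guard input scanners and log the results to Langfuse.

    `trace` is a Langfuse trace client, created as in the sketch above.
    """
    # Wrap the whole LLM Guard pass in one span so its blocking latency shows up in the trace.
    span = trace.span(name="llm-guard-input-scan", input=user_prompt)
    sanitized, results_valid, results_score = scan_prompt(input_scanners, user_prompt)
    span.end(output={"sanitized_prompt": sanitized, "valid": results_valid})

    # One Langfuse score per scanner, attached to the scan span.
    for scanner_name, risk in results_score.items():
        langfuse.score(
            trace_id=trace.id,
            observation_id=span.id,
            name=f"llm-guard-{scanner_name.lower()}",
            value=risk,
            comment="passed" if results_valid.get(scanner_name, False) else "flagged",
        )

    return sanitized, all(results_valid.values())
```

LLM Guard's output scanners can be handled the same way on the model response; each scanner simply becomes one more score on the corresponding span.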
Dimensions
Output