I ran the below (albeit not very robust) experiment with and without the new JSON mode parameter `response_format`. I couldn't see any noticeable difference in response quality or response time across several runs, apart from the small bump in prompt tokens from the prompt suffix `"\n\nGenerate your response in valid json format"` (JSON mode requires the word "json" to appear somewhere in the prompt).

From the outside, I couldn't see any way in which including the `response_format` parameter hurt instructor's quality or performance, but I wanted to ask what you think about this, since there might be some upside to instructor leveraging this parameter under the hood when using the "*-1106" model versions.

P.S. Amazing library, thank you.
```python
import instructor
import enum

from pydantic import BaseModel, Field
from openai import OpenAI
from src.db.data_prod_manager import DataProdManager

client = instructor.patch(OpenAI())


class NaturalLanguageQueryProbability(int, enum.Enum):
    """Enumeration for how likely a user comment is a SQL Natural Language Query."""

    DEFINITELY_NOT_NLQ = 1
    LIKELY_NOT_NLQ = 2
    NEUTRAL_OR_UNSURE = 3
    LIKELY_NLQ_ = 4
    DEFINITELY_NLQ_ = 5


class NaturalLanguageQueryClassification(BaseModel):
    """Class for the classification of a user comment as a SQL Natural Language Query."""

    nlq_probability: NaturalLanguageQueryProbability
    nlq_confidence: float = Field(
        ...,
        ge=0.0,
        le=1.0,
        description="Confidence in the classification of the user comment as a SQL Natural Language Query.",
    )


def classify_nlq(user_comment: str) -> NaturalLanguageQueryClassification:
    """Classify a user comment as a SQL Natural Language Query."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        response_model=NaturalLanguageQueryClassification,
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "user",
                "content": f"Classify the following text:\n{user_comment}\n\nGenerate your response in valid json format.",
            }
        ],
    )
    return response


user_message: str = "Find chats where high Net Promoter Scores (NPS) correlate with specific user journey touchpoints."
prediction = classify_nlq(user_message)

print(f"Completion Usage: {prediction._raw_response.usage}")
print(f"NLQ Likelihood Value: {prediction.nlq_probability.value}")
print(f"NLQ Likelihood Category: {prediction.nlq_probability.name}")
print(f"NLQ Likelihood Confidence: {prediction.nlq_confidence}")
```
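To make the "under the hood" idea a bit more concrete, here is a minimal sketch of the kind of behaviour I mean. `create_with_json_mode` is a hypothetical name, not an actual instructor API; instructor would presumably do this inside `patch` rather than in user code:

```python
def create_with_json_mode(model: str, **kwargs):
    """Hypothetical wrapper: opt into JSON mode automatically for *-1106 models."""
    if "-1106" in model:
        # Only set the parameter if the caller hasn't already chosen one.
        kwargs.setdefault("response_format", {"type": "json_object"})
    return client.chat.completions.create(model=model, **kwargs)


prediction = create_with_json_mode(
    model="gpt-3.5-turbo-1106",
    response_model=NaturalLanguageQueryClassification,
    messages=[
        {
            "role": "user",
            # JSON mode requires the word "json" to appear somewhere in the messages.
            "content": f"Classify the following text:\n{user_message}\n\nGenerate your response in valid json format.",
        }
    ],
)
```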