```python
from llama_cpp import Llama

llm_model = Llama(..., chat_format="llama-3", ...)

user_request = "what is the meaning of life?"
response = llm_model.create_chat_completion(
    messages=[{
        "role": "user",
        "content": user_request,
    }]
)
```
Will this generation work properly or do I need to wrap user_request in LLaMa 3's system prompt?
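For context, this is a minimal sketch of the prompt string a Llama 3 chat handler is expected to render from the `messages` list, based on Meta's published template. `render_llama3_prompt` is a hypothetical illustration, not llama-cpp-python's API; the library's actual handling of BOS tokens and a default system message may differ.

```python
# Hypothetical illustration of the Llama 3 chat template (assumption:
# follows Meta's published format; not the library's own code).
def render_llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = render_llama3_prompt([
    {"role": "user", "content": "what is the meaning of life?"},
])
print(prompt)
```

If `chat_format="llama-3"` wires up a handler that performs this kind of rendering, the caller would only supply the plain `messages` list, with no manual wrapping of `user_request`.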
Their proposed system prompt for reference:
More or less related discussions I found:
Alas, no definitive answer there.