In LangChain4j version 0.30.0 (Quarkus integration 0.11.0), when an AiService is declared to return a specific type, e.g. List, the instruction that makes the LLM use the right format gets appended to the UserMessage instead of the SystemMessage, like this:
[{"role":"system",
"content":"Split the user feedback into coherent parts of literal user text that treat a similar topic, that can then be addressed separately.\nIf the user treats the same topic in multiple parts of the feedback, make sure to group them together. If the feedback is about only one topic, return only one item.\n"},
{"role":"user",
"content":"great food!\nYou must put every item on a separate line."}]
(In this example, I specified the SystemMessage; '\nYou must put every item on a separate line.' was added by LangChain4j.)
Because this instruction is added to the UserMessage, half of the time the model returns two feedback items instead of the expected one:
great food!
You must put every item on a separate line.
I think this would be solved by adding the instruction as the last part of the SystemMessage.
I'm not entirely sure how this would interact with memory. However, the typical use case when AiServices return Java objects other than strings is that they have no memory; and if they do, every answer will follow the format specified in the SystemMessage instruction, so I expect that to work fine.
The same thing happens for AiServices returning POJOs: the JSON format they should answer in is also attached to the UserMessage instead of the SystemMessage.
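The proposed placement can be sketched as follows. This is a minimal, self-contained illustration of the idea (a hypothetical helper, not actual LangChain4j code): the auto-generated format instruction is concatenated to the system message, leaving the user message untouched.

```java
import java.util.List;
import java.util.Map;

public class FormatInstructionPlacement {

    // Proposed behavior: append the auto-generated format instruction to the
    // end of the system message, so the model cannot mistake it for part of
    // the user's feedback text.
    static List<Map<String, String>> buildMessages(
            String systemPrompt, String userText, String formatInstruction) {
        return List.of(
                Map.of("role", "system", "content", systemPrompt + "\n" + formatInstruction),
                Map.of("role", "user", "content", userText));
    }

    public static void main(String[] args) {
        List<Map<String, String>> messages = buildMessages(
                "Split the user feedback into coherent parts of literal user text...",
                "great food!",
                "You must put every item on a separate line.");
        // The user message now contains only the literal feedback.
        System.out.println(messages.get(1).get("content"));
    }
}
```

With this construction the UserMessage content stays exactly what the caller passed in, which is the behavior this issue argues for.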
langchain4j changed the title from "[BUG] Format instructions for AiServices return type should be added in SystemMessage instead of UserMessage" to "[FEATURE] An option to add format instructions for AiServices return type in SystemMessage instead of UserMessage" on May 10, 2024.
Just tested with the same example as above, but adding 'You must put every item on a separate line' at the end of the SystemMessage (without an AiService):
In case of multiple items, the result is correct; the model returns e.g.
Great food!
The room was a bit cold though
In case of a single item (UserMessage 'Great food!'), it doesn't behave correctly at all; the model returns (multiple times):
Thank you for your positive feedback on the food!
I can probably prompt it in a way that makes it do the right thing. I think that adding 'You must put every item on a separate line' at the end of the SystemMessage will usually not cause an issue (but adding it to the UserMessage does). In any case, it would be nice if one could override the automatic LangChain4j addition, or concatenate something before or after it.
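The override/concatenate idea could look something like this. The hook below is purely hypothetical (no such API exists in LangChain4j today): the caller supplies a function that receives the auto-generated instruction and may wrap, extend, or replace it.

```java
import java.util.function.UnaryOperator;

public class FormatInstructionCustomizer {

    // Hypothetical customization hook: the framework would pass its
    // auto-generated format instruction through a caller-supplied function
    // before adding it to the prompt.
    static String applyCustomizer(String autoInstruction, UnaryOperator<String> customizer) {
        return customizer.apply(autoInstruction);
    }

    public static void main(String[] args) {
        // Example: prepend a clarification so the model does not treat the
        // instruction as part of the user's feedback.
        String result = applyCustomizer(
                "You must put every item on a separate line.",
                auto -> "Formatting rules (not part of the feedback): " + auto);
        System.out.println(result);
    }
}
```

An identity function would reproduce the current behavior, so such a hook would be backward compatible.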
It might be relatively easy to address when touching the code for #904.
Related: I tested the TextUtils summarise method in our tutorial examples with n = 1. It returned any number of items (from 2 to 4), but never 1. I observed an occasional similar issue during the labs, where it would (exceptionally) return 2 bullet points instead of 3. gpt-3.5 definitely seems to struggle with returning just one item.