
[FEATURE] An option to add format instructions for AiServices return type in SystemMessage instead of UserMessage #1086

Open
LizeRaes opened this issue May 10, 2024 · 3 comments
Labels
enhancement (New feature or request), P2 (High priority)

Comments

@LizeRaes (Collaborator)

In LangChain4j version 0.30.0 (Quarkus integration 0.11.0), when an AiService returns a specific type, e.g. a List, the instruction that tells the LLM which format to use is appended to the UserMessage instead of the SystemMessage, like this:

[{"role":"system",
"content":"Split the user feedback into coherent parts of literal user text that treat a similar topic, that can then be addressed separately.\nIf the user treats the same topic in multiple parts of the feedback, make sure to group them together. If the feedback is about only one topic, return only one item.\n"},
{"role":"user",
"content":"great food!\nYou must put every item on a separate line."}]

(In this example, I specified the SystemMessage; the '\nYou must put every item on a separate line.' part was added by LangChain4j.)
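
For context, the service looks roughly like this (a minimal sketch, not my exact code; the interface and method names are made up, and I'm assuming a plain ChatLanguageModel wired through AiServices.create):

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

import java.util.List;

public class FeedbackSplitterExample {

    interface FeedbackSplitter {

        @SystemMessage("Split the user feedback into coherent parts of literal user text "
                + "that treat a similar topic, that can then be addressed separately.")
        List<String> split(String feedback);
    }

    // Because the return type is List<String>, LangChain4j appends
    // "\nYou must put every item on a separate line." to the user message
    static List<String> split(ChatLanguageModel model, String feedback) {
        FeedbackSplitter splitter = AiServices.create(FeedbackSplitter.class, model);
        return splitter.split(feedback);
    }
}
```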

Because this instruction is added to the UserMessage, about half of the time the model returns two feedback items instead of the expected one:

  1. great food!
  2. You must put every item on a separate line.

I think this would be solved by appending the instruction as the last part of the SystemMessage.
I'm not entirely sure how this would interact with memory. However, the typical use case for AiServices that return Java objects other than strings is that they have no memory, and if they do, every answer will follow the format specified in the SystemMessage instruction, so I expect that to work fine.

The same thing happens for AiServices returning POJOs: the JSON format the model should answer in is also attached to the UserMessage instead of the SystemMessage. A minimal sketch of that case is below (the POJO and its fields are made up for illustration):
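
```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;

public class FeedbackTriageExample {

    // Hypothetical POJO; LangChain4j derives a JSON format instruction
    // from these fields and appends it to the UserMessage
    static class FeedbackItem {
        String topic;
        String text;
    }

    interface FeedbackTriage {

        @SystemMessage("Extract the topic and the literal user text from the feedback.")
        FeedbackItem extract(String feedback);
    }

    static FeedbackItem extract(ChatLanguageModel model, String feedback) {
        return AiServices.create(FeedbackTriage.class, model).extract(feedback);
    }
}
```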

LizeRaes added the bug (Something isn't working) label on May 10, 2024
@langchain4j (Owner)

Related: #793

langchain4j changed the title from "[BUG] Format instructions for AiServices return type should be added in SystemMessage instead of UserMessage" to "[FEATURE] An option to add format instructions for AiServices return type in SystemMessage instead of UserMessage" on May 10, 2024
langchain4j added the enhancement (New feature or request) label and removed the bug (Something isn't working) label on May 10, 2024
@langchain4j (Owner)

Hi @LizeRaes, thanks! Did you test whether setting the instructions in the system message works better?

@LizeRaes (Collaborator, Author)

I just tested with the same example as above, but adding 'You must put every item on a separate line' at the end of the SystemMessage (without an AiService).
In case of multiple feedback items, the result is correct; the model returns e.g.:

  • Great food!
  • The room was a bit cold though

In case of a single item (UserMessage 'Great food!'), it doesn't behave correctly at all; the model returns (multiple times):
Thank you for your positive feedback on the food!
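
For reference, the manual test was along these lines (a rough sketch, not the exact code I ran; 'model' is a ChatLanguageModel, gpt-3.5 in my case):

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.output.Response;

public class ManualFormatInstructionTest {

    static String split(ChatLanguageModel model, String feedback) {
        // Format instruction placed at the end of the system message
        // instead of being appended to the user message
        SystemMessage system = SystemMessage.from(
                "Split the user feedback into coherent parts of literal user text "
                        + "that treat a similar topic, that can then be addressed separately.\n"
                        + "You must put every item on a separate line.");
        UserMessage user = UserMessage.from(feedback);

        Response<AiMessage> response = model.generate(system, user);
        return response.content().text();
    }
}
```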

I can probably prompt it in a way that makes it do the right thing. I think that adding 'You must put every item on a separate line' at the end of the SystemMessage will usually not cause an issue (whereas adding it to the UserMessage does). In any case, it would be nice if one could override the automatic LangChain4j addition, or prepend/append something to it.

It might be relatively easy to address when touching the code for #904.

Relatedly, I tested the TextUtils summarise method in our tutorial examples with n = 1. It returns any number of items (from 2 to 4), but never 1. I have occasionally observed a similar issue during the labs, where it would (exceptionally) return 2 bullet points instead of 3. gpt-3.5 definitely seems to struggle with returning just one item.

langchain4j added the P2 (High priority) label on May 13, 2024