Feature Request
A common issue with LLMs is that when you ask a 'general' question about a topic the model was not trained on, or has little or no information about, it will often either fill the gaps creatively or state something that is plainly false. It would be great to have a stop-gap measure: if the model completely lacks data on the question, the response should say "I do not know" (perhaps highlighted in red or cyan). If it has only partial data, it could say "I have incomplete data, but what I do have on the topic is X; you could try getting more information from these sources." Also, if the user's question is too broad, the model could ask the user to redefine or narrow it, which could save compute time as well.
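As a rough illustration of what such a gate might look like, here is a minimal sketch that uses the mean probability of the sampled tokens to decide between answering, answering with a warning, and refusing. All names and thresholds here are hypothetical, not part of any existing API, and token-level confidence is only a weak proxy for factual accuracy, which is part of why this is hard to do well:

```python
import math

# Hypothetical confidence gate. Thresholds are illustrative only;
# real models are not well calibrated, so token probability is a
# weak proxy for "does the model actually know this".
def gate_response(answer: str, token_logprobs: list[float],
                  low_conf_threshold: float = 0.5,
                  refuse_threshold: float = 0.2) -> str:
    """Decide how to present an answer from its token log-probabilities."""
    if not token_logprobs:
        return "I do not know."
    # Geometric-mean probability of the sampled tokens.
    mean_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    if mean_prob < refuse_threshold:
        return "I do not know."
    if mean_prob < low_conf_threshold:
        return f"(low confidence, may be inaccurate) {answer}"
    return answer
```

For example, `gate_response("Paris", [-0.01, -0.02])` would pass the answer through unchanged, while a run of very unlikely tokens would be replaced with "I do not know."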
This feature could give a model better credibility: instead of switching into creative mode when it doesn't know something or has incomplete data, it would simply state that it does not know, or warn that its answer might be incorrect. At least then we could check other sources for an answer.
I suspect that if this were easy to implement, it would already be a feature. But I thought it was worth suggesting, because not knowing how accurate an answer is remains the biggest issue I have when seeking answers to questions. Maybe it is simply not possible given the sheer scope of questions and topics.