Problem
When code corrections are triggered, the user is left waiting with no feedback in the CLI about the current status of the process (image below).
Solution
Streaming output from the LLMs between corrections would prevent the user from thinking the process has halted or crashed (green region in image below).
This is particularly troublesome when running inference on slow setups, such as local LLMs on laptops (e.g., Llama 3 8B).
Good point. What it does during that gap is develop a new version of the code that incorporates the fix. We could easily enable streaming to the terminal by changing line 510 in the bambooai.py module to llm_response = self.llm_stream(self.log_and_call_manager, code_messages, agent=agent, chain_id=self.chain_id), but that would make the terminal window really busy/cluttered. I will try to think of something.
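A middle ground might be to consume the stream internally but print only a terse progress indicator, so the user sees activity without the full token firehose. A minimal sketch of that idea — note that fake_llm_stream and collect_with_progress are hypothetical stand-ins, not actual bambooai functions:

```python
import sys
import time

def fake_llm_stream(prompt):
    # Stand-in for the real streaming call (e.g. self.llm_stream in
    # bambooai.py); yields tokens one at a time.
    for token in ["def ", "add", "(a, ", "b):", "\n    ", "return ", "a + b"]:
        time.sleep(0.01)  # simulate generation latency
        yield token

def collect_with_progress(stream, every=3):
    """Accumulate streamed tokens, printing one dot per few tokens
    instead of echoing the full stream to the terminal."""
    tokens = []
    for i, token in enumerate(stream, start=1):
        tokens.append(token)
        if i % every == 0:
            sys.stdout.write(".")
            sys.stdout.flush()
    sys.stdout.write(" done\n")
    return "".join(tokens)

if __name__ == "__main__":
    code = collect_with_progress(fake_llm_stream("fix the bug"))
    print(code)
```

The correction loop would still receive the complete response string, while the terminal only shows a dot every few tokens, keeping it uncluttered.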