
No feedback for the user while correcting code #12

Open
pnmartinez opened this issue May 3, 2024 · 1 comment

Comments

@pnmartinez
Contributor

Problem

When code corrections are triggered, the user is left waiting without any feedback in the CLI about the current status of the process (image below).

Solution

Streaming output from the LLMs to the terminal while the code is being corrected would prevent the user from thinking the process has halted or crashed (green region in the image below).

This is particularly troublesome when running inference on slow setups (such as local LLMs on laptops, e.g. Llama 3 8B).

_(screenshot: CLI output during code correction)_
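The simplest form of the feedback proposed above is printing LLM output chunks as they arrive instead of waiting for the full response. A minimal sketch, assuming `chunks` is any iterable of text fragments (a hypothetical stand-in for what a streaming LLM API yields):

```python
import sys

def stream_to_terminal(chunks):
    """Print LLM output fragments as they arrive so the user sees progress,
    then return the full assembled response."""
    collected = []
    for chunk in chunks:
        collected.append(chunk)
        sys.stdout.write(chunk)
        sys.stdout.flush()  # force each fragment to appear immediately
    return "".join(collected)
```

The caller still receives the complete response string, so downstream code that expects the full text is unaffected.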

@pgalko
Owner

pgalko commented May 5, 2024

Good point. What it does during that gap is develop a new version of the code, incorporating the fix. We can easily enable streaming to the terminal by changing line 510 in the bambooai.py module to `llm_response = self.llm_stream(self.log_and_call_manager,code_messages,agent=agent, chain_id=self.chain_id)`, but it will make the terminal window really busy/cluttered. I will try to think of something.
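A middle ground between full streaming and a silent terminal is a lightweight status indicator that runs while the blocking LLM call is in flight and erases itself afterwards, keeping the main output clean. A minimal sketch (the `Spinner` class and its usage are illustrative, not part of bambooai):

```python
import itertools
import sys
import threading
import time

class Spinner:
    """Show 'message <glyph>' on stderr while wrapped work runs, then
    erase the line so normal terminal output stays uncluttered."""

    def __init__(self, message="Correcting code", interval=0.1):
        self.message = message
        self.interval = interval
        self._stop = threading.Event()
        self._thread = None

    def __enter__(self):
        self._thread = threading.Thread(target=self._spin, daemon=True)
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
        # overwrite the spinner line with spaces, then return the cursor
        sys.stderr.write("\r" + " " * (len(self.message) + 2) + "\r")
        sys.stderr.flush()

    def _spin(self):
        for glyph in itertools.cycle("|/-\\"):
            if self._stop.is_set():
                break
            sys.stderr.write(f"\r{self.message} {glyph}")
            sys.stderr.flush()
            time.sleep(self.interval)

# Hypothetical usage around the blocking call discussed above:
# with Spinner("Attempting code correction"):
#     llm_response = self.llm_call(self.log_and_call_manager, code_messages,
#                                  agent=agent, chain_id=self.chain_id)
```

Writing to stderr keeps the indicator out of any piped or logged stdout, and the context manager guarantees cleanup even if the call raises.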
