
Reflection for agent #559

Open
afeezaziz opened this issue May 4, 2024 · 1 comment
@afeezaziz

How can I use crewai to make agents reflect on and learn from their own output? Do I have to use crewai in conjunction with langgraph?

An example case: a code monkey agent writes some Python code, then a tester agent checks it for errors. The tester informs the code monkey agent of the errors, and the code monkey rewrites the code. Another example might be a writer agent that rewrites a draft based on feedback from an editor agent.

@alexfazio
Contributor

Do I have to use crewai in conjunction with langgraph?

That's correct. It may not be the only solution, but it is the most effective and well-tested one.

https://www.youtube.com/watch?v=5eYg1OcHm5k&t=132s
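For reference, here is a minimal sketch of the write/review/rewrite pass described above, using only plain crewai primitives. The agent roles, task descriptions, and the {requirement} placeholder are illustrative, and details such as expected_output, context, and kickoff(inputs=...) may vary between crewai versions:

```python
from crewai import Agent, Task, Crew, Process

# The "code monkey" agent that writes the code.
coder = Agent(
    role="Python Developer",
    goal="Write working Python code for the given requirement",
    backstory="An experienced Python developer who writes clean, correct code.",
)

# The tester agent that reviews the output and reports errors.
tester = Agent(
    role="QA Tester",
    goal="Find bugs in submitted code and explain how to fix them",
    backstory="A meticulous reviewer who checks code for errors and edge cases.",
)

write_task = Task(
    description="Write a Python function that {requirement}.",
    expected_output="A single Python code block.",
    agent=coder,
)

review_task = Task(
    description="Review the code from the previous task and list every error you find.",
    expected_output="A numbered list of problems, or the word APPROVED if there are none.",
    agent=tester,
    context=[write_task],  # the tester sees the coder's output
)

rewrite_task = Task(
    description="Rewrite the code, fixing every problem the reviewer reported.",
    expected_output="The corrected Python code block.",
    agent=coder,
    context=[write_task, review_task],  # the reviewer's feedback flows back to the coder
)

crew = Crew(
    agents=[coder, tester],
    tasks=[write_task, review_task, rewrite_task],
    process=Process.sequential,  # tasks run in order, passing context forward
)

result = crew.kickoff(inputs={"requirement": "parses a CSV file into a list of dicts"})
print(result)
```

Note that this is a single fixed pass (write → review → rewrite), not an open-ended loop. To keep iterating until the tester approves, you would wrap crew.kickoff() in your own Python loop, or hand the control flow to langgraph as suggested above.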
