I made a janky-ish bash script that gives an LLM live access to a shell, calculator, notepad, time, etc. #4731
skidd-level-100
started this conversation in Show and tell
-
Can you show code?
-
edit: I made a GitHub repo:
https://github.com/skidd-level-100/jankllama
Disclaimer:
this was written in like 5 mins before going to sleep, so it is going to be terribly written.
I am still working on it, but it's very impressive so far. For example, I asked it to scan my network with nmap, count the number of devices, multiply that by 4, and take the square root of that number (in similar wording to that), and it did it: it chained commands together, used my calculator function, and got the right answer.
The way it has "live" time access: every time the user inputs something, the time is attached to the message. It can also run the date command (or any non-interactive bash/Linux utility available in the container) on a whim, and of course its shell access is inside a locked-down, resource-limited, read-only container (podman).
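The timestamp attachment could be as simple as a one-liner that stamps each user turn before it is appended to the stored context (a minimal sketch; the `[timestamp] USER:` label format is my invention, not necessarily what the script uses):

```shell
# Sketch of the "live time" trick: prefix every user turn with the
# current time before it goes into the stored context.
# The "[timestamp] USER:" label format is an assumption.
tag_input() {
  printf '[%s] USER: %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$1"
}

# e.g.  tag_input "how long until midnight?" >> ctx.txt
```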
Taking notes well is still in progress (it works, but needs more prompt engineering and maybe background automation).
It is pretty quick to run (slower than vanilla) and has resumable sessions (it gets slower the longer the context, but not by a ton).
I'll most likely make a Python version, but for now the easy system integration with Linux is REALLY nice to have in bash.
How it works (rough 1-minute write-up)
It runs llama in non-interactive mode with reverse prompts that end generation when the model marks a function to run.
example:
"
bot:
bla bla bla bla, I will run a shell command
CMD
"
Thanks to llama's '-r' command-line option and non-interactive mode, the program exits and passes its output to my bash script. The script picks up the command, runs it in a container (killing it after 3 seconds in case of a loop), appends the output to the stored prompt with some fancy labeling, then re-runs llama on the modified context and waits.
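Pieced together, that loop might look roughly like this. This is a hypothetical sketch, not the actual script: the CMD marker name, the file layout, where the command sits relative to the marker, and the podman flags are all assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the dispatch loop described above.
CTX=ctx.txt      # stored prompt / conversation context
MARKER='CMD'     # reverse prompt passed to llama as: -r 'CMD'

run_tool() {
  # Run the extracted command in a throwaway sandbox, killing it
  # after 3 seconds in case of a loop. These podman flags are one
  # plausible lockdown, not necessarily the author's.
  timeout 3 podman run --rm --read-only --network none --memory 256m \
    alpine sh -c "$1" 2>&1
}

step() {
  # llama in non-interactive mode stops generating at the marker,
  # e.g.:  llama-cli -m model.gguf -f "$CTX" -r "$MARKER" >> "$CTX"
  # Then pick out the command; here we assume it sits on the line
  # just before the last CMD marker.
  local cmd
  cmd=$(grep -B1 "^${MARKER}\$" "$CTX" | tail -n2 | head -n1)
  {
    printf '\n[TOOL OUTPUT]\n'
    run_tool "$cmd"
    printf '[END TOOL OUTPUT]\n'
  } >> "$CTX"
  # ...then re-run llama on the grown context and wait again.
}
```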
Since it's "non-interactive", the bot prints a 'userinput' marker to the screen when it needs feedback or more instructions. The script then adds some fancy labeling before and after your message and opens the stored output in your default terminal editor (this is nice for being able to edit anything it has said or will say, and for fixing errors). Once you save the file and close your editor, the modified context gets passed back to llama. This is also nice because the bot can chain commands and functions together without user input interrupting.
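The editor handoff might be sketched like this (again hypothetical: the [USER]/[END USER] labels are made up, and the real script's labeling may differ):

```shell
# Sketch of the feedback step: when the model asks for user input,
# label a slot for the reply and open the whole stored context in
# $EDITOR, so you can also edit anything the model has already said.
# The [USER]/[END USER] labels are assumptions.
get_feedback() {
  local ctx=$1
  printf '\n[USER]\n' >> "$ctx"
  "${EDITOR:-vi}" "$ctx"      # type your message under [USER], save, quit
  printf '[END USER]\n' >> "$ctx"
  # ...the modified context is then fed straight back to llama.
}
```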
Anyway, it's pretty jank for now and was built in ~5 hours (mostly prompt engineering; the script itself is like 80 lines), but in theory, when I release it, the code quality won't totally suck.
Let me know any functions you would like the LLM to be able to pass arguments into; it should be "easy" to implement them in a semi-modular fashion.