What he's saying is that everything should be idempotent, which may be possible for local-only calls and filesystem snapshots, but anything doing a network call is outside the realm of possibility. Such a system would need to spin up a local, accurate backend for any network call, execute the call, verify the results are not catastrophic, and then retry with the real call; even then we introduce time-based uncertainty, since the real system may drift far enough from the expected state during the local validation. A fun thought experiment, but science fiction IMHO.
dang, I think you're right, my mind branched off somewhere it seems. I was thinking of how operations can be executed multiple times (verification + actual result run) with effect being applied only once.
Some ~20 years ago someone gave me access to their server and I typed `rm -rf something ` instead of `rm -rf something`. I have been hyper-paranoid about destructive commands ever since. Yesterday I wanted to set up a boot USB for Bazzite on a machine with two NVMe drives, and I kept checking multiple times that the USB drive was indeed at /dev/sda and that nothing else could possibly be that drive, even though the SSDs were all on /dev/nvme0. Some hard lessons you never forget.
In my experience, that tends to just make the approval of specific file deletions reflexive.
The worst situation I've been in was running the classic 'rm -rf' from the root filesystem, several decades ago.
I was running a bootable distro and had mounted all filesystems read-only except the one I was actually attempting to reformat and repurpose, and the upshot was that I got to see what a system with only shell internals (not sure it was even full bash) and little else functions like. (I found that "echo *" is a good poor-man's 'ls'.) Then, having removed the filesystem I'd intended to remove in the first place (and a few more ... memory-only ... filesystems), I rebooted and continued.
What saved me was safing all parts of the system except the one I was specifically acting on. Where I've had to perform similarly destructive commands elsewhere and since, I've made a habit of doing the same: ensuring I had backups where necessary and triple-checking that what I wanted to annihilate was in fact what I was going to annihilate.
Among those practices (roughly sketched below):

- I'll often move files or directories to a specific "DELETE_ME" directory, which 1) gives a few non-destructive checkpoints before the destructive action, and 2) takes no system time or space (file or directory moves on the same filesystem don't involve copying or writing data other than filesystem metadata). I then review and finally delete those files.

- I'll set all filesystems other than those I'm specifically performing surgery on to read-only. This suffices for almost any file-oriented actions, though of course not filesystem or partition operations. ('dd' is the exception among file-oriented commands, though you'd have to be writing to a partition to cause problems.)

- Rather than using dynamically-generated file lists (e.g., shell globs, 'find | xargs', $(shell expansions), or similar techniques), I'll generate a one-off shell script to perform complex operations. This makes every expansion explicit and permits reviewing the operations before committing to them.

- I'll often log complex output so that I can review the operation and see if it ran as intended.
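To make that concrete, here's a rough sketch of those habits; the paths, mount points, and file patterns are examples only, not my actual setup:

# Rough sketch; paths and patterns below are made up for illustration.

# 1) Stage deletions instead of deleting: a same-filesystem move is metadata-only.
mkdir -p /home/me/DELETE_ME
mv -iv -- ./old-build ./stale-cache.tar /home/me/DELETE_ME/
# ... review, then later: rm -rf /home/me/DELETE_ME

# 2) Remount everything I'm not operating on read-only.
mount -o remount,ro /mnt/data

# 3) Generate an explicit one-off script rather than piping a live find into rm,
#    and keep a log so the run can be reviewed afterwards.
find /srv/old-logs -name '*.log.gz' -mtime +90 -printf 'rm -v -- %p\n' > purge.sh
less purge.sh                      # every expansion is visible before anything runs
bash purge.sh | tee purge.log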
Return 0, but don’t do anything yet. Fire a cron with an N-minute sleep that destroys the FS on expiry. Also, rewrite various ZFS tooling to lie about the consumed space, and confound the user with random errors if they try to use the still-allocated space.
Huh, and here I am running Llama 3 locally (and claude.ai for less complex stuff), asking well-formatted, specific questions and still adjusting the output before implementing it.
Besides, I need .sh scripts, not just CLI completion.
But this reminds me of warp. Gonna have to give it a spin in the morning.
I've been using https://github.com/tom-doerr/zsh_codex with GPT-4o and it saves a lot of keystrokes compared to querying with GitHub Copilot CLI, since I just have to press Ctrl-X in addition to typing the prompt.
Magic-cli also seems to use the same workflow as GitHub Copilot, so I'm not rushing to use it.
This is nice. I've been taking Termium[0] for a spin and it's been pretty great for the most part, but the Rumsfeld-complete always-on autosuggest/copilot UX they're aiming for does feel like a bit of a compromise.
On occasions when I do know what I don't know, and want to specifically opt in, this looks perfect.
Ah, forgot to include that! It's a way to edit any of my functions via "edit <functionname>", and it drops you right on the correct line in your $EDITOR of choice. Otherwise it defaults to passing the argument to your editor (ostensibly as a path).
needs() {
    # Usage: needs <binary> [message...]; fails with a message if <binary> isn't in PATH.
    [ -v EDIT ] && unset EDIT && edit_function "${FUNCNAME[0]}" "$BASH_SOURCE" && return
    local bin="$1"
    shift
    command -v "$bin" > /dev/null 2>&1 || {
        printf "%s is required but it's not installed or in PATH; %s\n" "$bin" "$*" 1>&2
        return 1
    }
}
contains() {
    # Usage: contains "<word list>" <word>; succeeds if <word> appears in the list.
    [ -v EDIT ] && unset EDIT && edit_function "${FUNCNAME[0]}" "$BASH_SOURCE" && return
    local word
    for word in $1; do
        if [[ "$word" == "$2" ]]; then
            return 0
        fi
    done
    return 1
}
edit_function() {
    [ -v EDIT ] && unset EDIT && edit_function "${FUNCNAME[0]}" "$BASH_SOURCE" && return
    needs rg "please install ripgrep!"
    local function_name="$1"
    function_name="${function_name//\?/\\?}"   # escape any '?' so it isn't treated as a regex metacharacter
    local file="$2"
    # Find the line where the function is defined, then open the editor at file:line.
    local fl=$(rg -n -e "${function_name} *\(\) *\{" -e "function +${function_name}(?: *\(\))? *\{" "$file" | tail -n1 | cut -d: -f1)
    $EDITOR "$file":$fl
}
edit() {
    [ -v EDIT ] && unset EDIT && edit_function "${FUNCNAME[0]}" "$BASH_SOURCE" && return
    # "functions" here is a helper (not shown) that presumably lists the names of defined shell functions.
    if contains "$(functions)" "$1"; then
        EDIT=1 $1
    else
        $EDITOR "$@"
    fi
}
Once you have those set in your environment, and EDITOR points to whatever editor you prefer, you can simply add the following line to the top of any bash function you define and make it editable in place, basically:
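[ -v EDIT ] && unset EDIT && edit_function "${FUNCNAME[0]}" "$BASH_SOURCE" && return;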
I use the [ -v variablename ] pattern to detect whether it's set or not, so that things like EDIT=1 and EDIT=true work the same way. I've also seen ((EDIT)) used, which gives a return code of 0 (making the expression true) for a value of 1 and otherwise fails, but that only works if you use 1 and 0 to designate "true" and "false" for switches. And it's of course confusing that you need to reverse those in Bash logic, which works off return codes rather than actual values.
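A tiny illustration of the difference (check_v and check_arith are throwaway names for this example):

# Illustration only.
check_v()     { [ -v EDIT ] && echo "seen" || echo "not seen"; }   # true whenever EDIT is set at all
check_arith() { ((EDIT))    && echo "seen" || echo "not seen"; }   # true only when EDIT is a non-zero number

EDIT=1    check_v       # seen
EDIT=true check_v       # seen
EDIT=1    check_arith   # seen
EDIT=true check_arith   # not seen ("true" isn't numeric, so it evaluates to 0)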
My version, called "gencmd", also has a web page, supports multiple models, and has org + groups support. Please try it out - would love your feedback. https://gencmd.com/
Can I ask why it's so complicated? I made something similar about a year ago and it's less than 150 lines of Python. It gives you an explanation, the option to run it with/without sudo, pretty colors, etc.
I guess I'm not very familiar with Rust but it just seems like a lot for what it does.
It isn't streaming the ollama output, so it feels slow (~3 words/second on a 3090 with the defaults). Using ollama directly, responses stream within a second and you can kill it early. I don't understand the UX of looping responses to the same question either. This does not feel like magic.
Woah, the shell features are super similar.
Honestly was not familiar with this project, looks great (and ambitious). I'll try it out. Thanks for the share.
I'm not affiliated with it, but I've been using the Warp terminal program for a few months now and suspect that if you're interested in this kind of thing, you might like that too.
In short, besides the obvious AI stuff, which works well:
- You can edit the command line as though it's in a GUI program (including with the mouse, etc.) instead of inside the terminal, where you need different keybindings and have no mouse.
- When in a shell, instead of your window being one long stream of text, each command and each output is a discrete area, so it's easier to, say, select the whole output of a command.
Warp also has a cool looking cataloging feature where commands can be bundled up and shared with your co-workers. Seems a good solution for sharing those dark arts folks tend to build up over time.
> Seems a good solution for sharing those dark arts folks tend to build up over time
This is one of the things I most _dislike_ about it. Don't incentivize hoarding those useful tools in yet-another-silo; get them out into a shared code package!
Fair! I’d not considered that aspect, but you’re right, serializing these into a git repo would be the correct solution here.
I think the integration is important though; I’ve vented plenty of steam at co-workers who don’t look at the COMMANDS.md / README.md / etc in a repo. It being auto imported into their terminal program (with search, autosuggestion, and adjacent documentation) seems a pretty killer offering for teams.
> It being auto imported into their terminal program [...] seems a pretty killer offering for teams.
I'm often pretty torn on recommendations like this - using another tool to account for coworkers' unwillingness to use (or learn to use) the existing/underlying one. It reminds me of a time I saw someone singing the praises of a GUI for Git because it allowed them to do things you couldn't do from the CLI "like adding only parts of a file" - to which someone replied simply "`git add -p`".
From an outcome-focused perspective, I suppose any introduced tool or process which "gets the job done better" is desirable, if it comes at zero cost. To me, the "lock-in" that everyone _has_ to use Warp in order to benefit from this shared knowledge is a non-zero cost, and requiring software engineers to know how to push code to a Git repo is not an unreasonable expectation. But if everyone's _already_ enthusiastic to use Warp for other reasons, I suppose my objection is moot.
> (with search, autosuggestion, and adjacent documentation)
adjacent documentation feels like a straw-man - man pages or `my-tool --help` exist for standard scripts! Ditto for search - if GitHub's search lets you down, then `grep searchterm /path/to/my/scripts/directory` still works. Autosuggestion is fair, though - although I do know that it's possible to have tool-specific auto-completes (e.g. https://kubernetes.io/docs/reference/kubectl/generated/kubec...), I'll bet Warp makes it easier than a standard shell does.
I have to agree; this seems great in theory but questionable in practice.
We know how much damage a CLI can do; they often don't have the protections most other systems have. I mean, if I copy files with AWS S3 there is zero confirmation that I am not overwriting files.
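For example (bucket and file names made up), this will silently replace whatever already lives at that key, no prompt at all:

aws s3 cp report.csv s3://my-team-bucket/reports/report.csv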
Personally I feel like if you really want to use an LLM to generate your commands, the extra step of copying it from a website is probably a good one. At least you will be forced to actually look at it instead of just assuming it is right and hitting enter.
The example given in the document is a simple one, but with more complex CLI calls I would be scared to use this for anything but the simplest of things.
That is ignoring the questionable decision to possibly send very sensitive information to ChatGPT to generate these commands.
Most people are pretty comfortable copying and pasting arbitrary commands they find on google and don't understand into the terminal, so I'm not convinced this is any worse.
curl google.com/?search=remove+directory+linux&feeling_lucky=1 | html_strip | head -n 1 | bash
is pretty dangerous - all things being equal, much more dangerous than copying and pasting. And of course everything is more dangerous if you avoid engaging your brain entirely.
It appears from the screenshots that this tool shows you the command it will run, with some explanation of what it does, and the command options used, and then confirms you want to run the command. That is very different than the curl command you suggested is equivalent.
suggest.mode: The mode to use for suggesting commands. Supported values: "clipboard" (copying command to clipboard), "unsafe-execution" (executing in the current shell session) (default: "unsafe-execution")
So default mode seems to be shoot first, ask questions later.
Awesome share! Thank you.
There are definitely similarities, and I love Simon's work.
I guess the extra features are some sophisticated UX (requesting the user to fill out "placeholders" in the response, ability to revise the prompt), the "ask" command and the "search" command.
Will definitely give this a spin.
All of these solutions seemed very heavyweight in my usage. I wanted something that fit within my existing flow, and using copilot.vim with EDITOR=nvim and C-x C-e was the solution for me. https://news.ycombinator.com/item?id=40911564
It's very composable and I can do incremental work with it.
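For anyone unfamiliar with that binding: C-x C-e is readline's edit-and-execute-command, so all it really takes is pointing EDITOR at your editor, e.g.:

# In ~/.bashrc: C-x C-e (emacs keymap) opens the current command line in $EDITOR
# and runs whatever you save; in the vi keymap it's "v" in command mode.
export EDITOR=nvim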
Despite being vimian I've found set -o vi hard to work with. Do you like it? Neovim terminal seems better for me since output is selectable in the buffer.