I didn't care for the feature (I have no issues with AI/LLMs but it just wasn't useful IMHO) but the backlash was ridiculous and embarrassing for everyone complaining about an opt-in feature. The comments on the GitLab ticket and here on HN were examples of some of the worst people (or people at their worst) in our industry.
The "opt-in" you're speaking of was leaving the API key textbox blank.
My understanding is that filling this in with anything is enough for iTerm to potentially send commands to OpenAI. There are also many companies that reject any application that integrates with third-party AI services, and iTerm would have been caught up in that.
Once your labor of love is used by hundreds of thousands/millions of people, then it's not unreasonable to receive complaints when the maintainer makes an unpopular decision. Not everyone has time to fork and maintain their own copy.
Entering an API key doesn't immediately cause iTerm to start sending data to OpenAI. That's a straight-up lie, debunked by people who actually tested the feature and posted their findings on the GitLab thread [1][2] about it.
OpenAI integration is activated only after you write a question to be sent to the AI [3]. It takes a lot of clicking and typing to get to that point and can't possibly happen by accident.
It's one thing to not like a feature. But to behave the way people behaved is not merely ridiculous, it's wrong and horrifying. People came to the issue tracker from Mastodon looking to pick a fight, hurled insults and lies, and fantasized about physically harming the dev [4], when all they had to do to avoid the feature was nothing.
Yeah, iTerm is a labour of love for the author, George Nachman, and people were acting like the ragebait-hateporn-parasocial crowd, swarming onto anything that promises a hearty 2-minute hate.
I have no idea how to solve that problem though, it's just part of our zeitgeist and I hope people get bored with it sooner rather than later.
It's useful to frame the event from the perspective of the users who were angry with the inclusion of the feature. AI is *everywhere* right now and the fatigue is getting to people. Many products are adding AI features that are genuinely useless and are only there for marketing purposes. Many products are putting their AI features front and center at the expense of their core competency etc.
People who responded with anger ("ragebait-hateporn") at this feel pressured on all sides by this AI hype cycle (not saying all AI features are hype, just that we are in a hype cycle where many are useless). It's getting frustrating and tiresome. For many it feels like the crypto hype cycle again.
It's easy to understand how "great, now even my terminal is putting AI features in?!" is the response, especially with a tool as beloved as iTerm, and as "close to the machine" as a terminal is. It's one thing for Notion to put AI things in, it's another for the interface where I regularly type sudo to do massive operations across my machine.
The correct response to false allegations followed by insults and threats is anything but capitulation. The software in question is popular free and open-source software with more than a decade of accumulated trust. The AI feature fundamentally requires the user to actively engage with it in order to use it [1], with no nagging or coercion whatsoever. In fact, the only people reminding us of its existence are the Mastodon mobs, not iTerm.
The feature wasn't added out of pure hype either. It was likely inspired by user feedback [2], and the dev ultimately added it because it was useful for him personally [3].
Despite all of this, people are raging about unprovable nefarious motives and making claims about spyware, as if it's Windows we're talking about. They pretend as if it's maintained by some faceless for-profit entity trying to screw us over instead of a sole developer trying to create good software in his spare time. Some are even openly fantasizing about inflicting physical violence [4].
This kind of behavior should be condemned, not praised.
There is a bigger story here - who uses iTerm2? It's gotta be like 95% developers, right?
If your most technically capable users, who are usually the MOST excited about new tech, are the people who are ready to burn down the world to stop you from adding AI features, what do you think normal people think?
I feel that, without several very major breakthroughs in the next few years, we're more likely to hit another AI winter in the 2030s than to achieve AGI.
It’s abundantly clear from participating on the issue tracker that these weren’t “real developers” commenting. They were hateful people. The result of all this BS is a less secure implementation (a separate plugin with weak IPC guarantees; I know firsthand, as I was one of the people trying to make sure this new plugin direction had at least a modicum of security, despite disagreeing with the direction). No actual security or compliance people were outraged, because their tooling is sufficient to properly manage iTerm2 in regulated environments, and no part of any security/IT professional’s or compliance officer’s security model depends on how many different binaries an application is split into. It was hyperbolic, misguided rage at its worst, to the point of being harmful to the app’s actual security posture.
The most common objections to AI from the dev community are privacy and the IP rights involved in training data (and how that fucks over individuals).
The general public almost exclusively judges the features that are handed to them in products based on their utility and not the above concerns, so they will always be less critical overall.
It’s not insufferable if it’s silent. In the case of iTerm, the reaction of the vocal minority was atrocious, and completely entitled towards an amazing open-source maintainer.
Well, that's the gamble with OpenAI, isn't it? There's some utility now, but it's currently unclear when we'll hit a hard wall on what's realistically achievable with the models (and their direct successors), and how many things OpenAI will be able to do with them.
I don't think anyone knows for sure where that wall will actually be (including OpenAI themselves). While I don't actually think we'll get AGI any time soon from any company[1], if they did manage to fully crack it, then there's really no limit to how much they can actually do and how much money they can actually make.
I certainly don't think we've seen the end of the capabilities though, even in the near term; I think the GPT models have a lot of room to improve and I think that newer models for generative images and video and music and 3D models are going to get substantially better. How "profitable" that will be will depend on a bunch of variables (e.g. costs of GPUs/TPUs/something else, energy costs, potential regulatory hurdles, etc.).
[1] I don't really know what I'm talking about, I'm not a machine learning or AI expert, it's just a gut feeling I have.
People clamor for open source software and then shit on the people who make it because they provided a feature they didn’t want but didn’t have to use.
I never understood the appeal of iTerm2. The built-in terminal app can do basically all the same things except tiling, but there’s always tmux if you need that.
- Visual bell, can show me when something has finished running
- Great incremental search, with regex supported
- Instant Replay, to allow time travel through anything erased from the terminal (like TUI screen repaints)
- Global search through all open terminals
- Triggers can highlight particular text output in any terminal when a regex matches
If using the Toolbelt feature:
- Keeps track of your paste history
- Allows the use of text snippets to save typing long commands on remote hosts where you can't define an alias
- Shows running background jobs and lets you send them signals
If using shell integration:
- Directory name picker based on frecency, which is useful for adding directory names when composing long commands or for jumping to a directory when using Zsh (which lets you omit the 'cd' command).
- Hotkeys to jump back to previously entered commands, to avoid scrolling through pages of console output to find where a command began
+ Can open windows and tabs in a preset arrangement
+ Can choose from multiple arrangements across multiple displays
+ Tells me if there's a newline in what I'm about to paste
+ Exports its config to a JSON file in a place I choose
+ I can remove the tab close buttons to prevent a misclick (I'm clumsy)
+ I can change tab colours from a shell script (see the snippet after this list)
+ Multiple profiles
+ Text spacing both horizontal and vertical can be tweaked
+ I can log everything displayed and typed in my terminals, for later searching
+ I can disable cmd-clicking on URLs
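Side note on the tab-colour item above: iTerm2 does this via its proprietary escape codes, so any shell script can recolour its own tab. A minimal sketch (the orange-ish channel values are arbitrary):

    # set the current tab's colour one RGB channel at a time (values 0-255)
    printf '\033]6;1;bg;red;brightness;255\a'
    printf '\033]6;1;bg;green;brightness;128\a'
    printf '\033]6;1;bg;blue;brightness;0\a'
    # restore the default tab colour
    printf '\033]6;1;bg;*;default\a'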
Terminal.app is a solid piece of software, but it hasn't been updated in an age. It only supports 256 colors, whereas most terminals now support truecolor. It's also a lot slower than other terminal software, as it doesn't do GPU rendering. And there are a number of extensions that are common in modern terminals but not supported in Terminal.app.
Granted, not everyone will notice or miss such features in their terminal.
You can also define custom clickable things via the "Smart Selections" feature. For instance I have all strings that match `\B#([0-9]{2,6})\b` open the page of that GH issue in my company's central issue tracker when they are cmd-clicked, and I have all strings that match `\bU\+([0-9a-fA-F]{2,6})\b` open up a details page for that Unicode code point.
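For anyone wanting to set up something similar: a rule like the first one lives in the profile's Smart Selection settings and looks roughly like this (the tracker URL is a made-up placeholder; \1 is the captured issue number):

    Regex:  \B#([0-9]{2,6})\b
    Action: Open URL -> https://issues.example.com/\1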
Terminal.app supports clicking URLs (cmd+double click). But what I want is to cmd+click (relative) file paths, with no URL scheme prefix. iTerm 2 supports this, and I miss it when I use Terminal.app.
I moved away from the Terminal because of the lack of UTF-8 support; I also "needed" 24-bit color support, as I was working on some silly 24-bit gradient progress bars at the time.
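As an aside, a quick way to check whether a terminal really honours 24-bit colour is a gradient one-liner like this; it prints a smooth red-to-blue ramp on truecolor terminals and visible banding on 256-colour ones:

    # background ramps from red to blue via the 24-bit SGR sequence ESC[48;2;<r>;<g>;<b>m
    awk 'BEGIN{for(i=0;i<256;i++)printf "\033[48;2;%d;0;%dm ",255-i,i; printf "\033[0m\n"}'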
I've been a happy Kitty user for some years now. It requires a specific termcap, so foreign servers are a royal PITA, otherwise it is pretty nice.
One advantage is that you can define arbitrary margins for your terminal. Thus, you can make your terminal full-screen and write in vim with nice margins, instead of super-wide ones like in the default. Super-wide margins for text are ugly.
`tmux -CC` integrated to the native tab splits and the buffer. That's the killer feature for me. No other terminal emulator has it, not on Mac, not on Linux.
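If you haven't tried it, the typical invocation is just this (the session name is only an example):

    # control mode: iTerm2 renders tmux windows and splits as native tabs/panes;
    # -A attaches to the "main" session if it exists, otherwise creates it
    tmux -CC new-session -A -s main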
So what I see here is yet another open-source creator/maintainer being crowd-bullied into compliance with the (possibly correct, but never mind) consensus on new features.
It's my personal opinion, not backed by any scientific studies or whatever, that this kind of behavior directly leads to developer burnout, and is, put plainly, toxic and counterproductive.
I still think the backlash was ridiculous and that a lot of people have a lot to apologize for towards the maintainer of this project, George Nachman.
I'm surprised that the maintainer actually moved this code out. I would have told people to pound sand, personally, given the absolute vitriol sent his way.
Remember the human on the other side of messages you are sending. This thread is nowhere near as bad as the previous but come on, people.
> I would have told people to pound sand, personally, given the absolute vitriol sent his way.
Sounds like a great way to get even more vitriol, and continue to get it for years afterward. Every time you released a new version, there it’d resurface again. Sure, you could block some people on GitLab, but then you’d see it on social media or what have you.
Or you could just move it out to a plugin quickly (which is what happened, the beta for this version was out in a couple of days) and curtail the backlash.
Being angry at the people angry about you is not a practical solution if your goal is peace of mind.
Which is all fine and dandy as long as you never have to interact with those people, but in this case the author would absolutely have to do that. If not directly in their issue tracker (though certainly there), then in every conversation about their project from then on. That would be quite tiring for most people and lead to burnout.
> Being angry at the people angry about you is not a practical solution
If you don’t think so, I’d be interested in understanding what you see as the principal difference between those statements. To clarify, I was also referring to (internet) strangers.
In sum, why even bother in this case? Just rip it out to a plugin and never think about it again. Now that is pragmatic.
"Security theatre" means doing complicated things which are claimed to increase security but don't do that.
The classic example is making everyone at US airports take off their shoes, have their shoes x-rayed, and then putting them back on -- for decades after the single incident in which a man failed to blow up a plane with explosives in his shoes. https://en.wikipedia.org/wiki/Richard_Reid
Moving code out into an optional plugin -- so that it can be verified not to be in use or even removed -- is not theatre.
It was never opt-out! Even the previous version had AI as opt-in: you had to turn it on and provide an API key. It was NOT on by default. Now even the code path has been moved to an external component.
Ugh, I am pretty sick of people trying to shoehorn AI into everything. AI is cool, don't get me wrong, but it feels like every company and organization is trying to hop onto the bandwagon and add it to places where I really don't want it. I can understand how a terminal might be able to benefit from AI (in the same way coding does with Copilot), but I don't know if I really want it in my terminal; if nothing else I don't want my terminal sending lots of network requests behind the scenes.
Maybe I'm in the minority but I dunno, I feel like I can't be the only person sick of adding AI to every damn thing. When I was applying to jobs last year, I ended up avoiding anything directly mentioning AI in the job description, because about 80% of the time it felt like it was some founder straight out of school who knew nothing about CS but knew just enough about how to proxy to ChatGPT that they were able to get a few million dollars of VC money, and they were extremely annoying to talk to. I guess having a bunch of people tell you that you're a visionary must give most people an ego, and I'm way too cynical to put up with that.
I think iTerm moving this to a plugin is a pretty good idea. If people want to add AI crap into everything, don't let me stop them, but I don't really want it added to the default stack of iTerm.
It's fair to say you're sick of it. I never used the feature myself, but here's the thing: I never even saw it and wouldn't have known it existed if not for the backlash. I had to search through settings to find any evidence of it.
You say it's "shoehorning" on one hand, then "I can understand how a terminal might be able to benefit from AI" on another.
It didn't send anything off in the background (you don't need an AI feature to do that, BTW).
The backlash was completely unwarranted here, and I say that as someone who wouldn't bother using that feature at all.
I see how what I said might sound contradictory, so let me explain.
In the case of startups using AI, most of them get some marginal benefit from the AI usage, if a bit overblown. By "shoehorning", what I was saying is that there isn't sufficient justification for a lot of the AI usage; it makes stuff 1% easier (maybe), while making the product more expensive (it's just subsidized by VC money or operating at a loss, so it doesn't immediately look more expensive in the short term). I didn't say "shoehorning" to imply that there's zero utility in it, just that there's not really enough to justify a lot of it.
> It didn't send anything off in the background (you don't need an AI feature to do that, BTW).
Fair enough, I didn't look into the implementation details, and obviously you can have stuff sent in the background without AI (e.g. usage statistics). Most AI stuff I've seen has been just proxied to OpenAI (or some competitor/affiliate), so I guess I just assumed that that's what iTerm was doing.
ETA:
Looking into this a bit further, I completely agree that the vicious comments that have been spreading in regards to this are completely ridiculous and mean-spirited. I'm sick of AI as much as the next guy, but people are being pretty dickish about something that, fundamentally, they're getting for free.
I get that AI is overhyped, but calling it “trash” seems silly and hyperbolic. LLMs can do some pretty neat things if you understand their limitations.
Fair enough, but in my experience when you run into anything outside the happy path it will become an exercise in extreme torment.
Maybe it’s the language I was using to debug an issue (TypeScript, with click/drag events overriding each other), but ChatGPT 4o (along with previous versions) would get into this circular path of offering suggestions (cycling the same three solutions).
It was also interesting to note that when this circular debug path started, it would start to remove random properties, objects, or types from the code. It wouldn’t be noticeable just reading it, but with any modern editor you’ll see the issues via your LSP/linter. The removals had nothing to do with the proposed solutions.
This issue seemed to happen to me every time. I think I have to relegate these LLMs to acting as advanced scaffolding tools, where I include detailed instructions for basic capabilities as well (rather than writing them myself, which still saves a decent number of hours all things considered).
I don’t know if other models are actually good at debugging (going to guess no, because they don’t seem to actually understand the context of the problem, just the relations between keywords when suggesting solutions).
I agree with the other poster, maybe trash is a harsh word but it is extremely bad for debugging anything advanced it seems.
It's a bit weird; I've definitely observed the circular logic when using it for debugging, but occasionally when I've seen that and called it out for that saying "You've already suggested A, B, and C, there's no value in suggesting them again", it actually will come up with a unique thing that actually solves the problem. I guess by eliminating the most common problems it has to start looking for more obscure stuff?
I've become a bit disillusioned with the idea of it being any good for direct code generation in its current iterations; nearly everything it's generated has required pretty substantial fixes on my end, to the point where I'm not sure it's actually saving me time. The thing I mostly use ChatGPT for now is parsing and digesting server logs.
> I think I have to relegate these LLMs to act like advanced scaffolding tools
This is a reasonable take. When it was first released, it was odd watching the entire internet treat ChatGPT as some kind of oracle.
It’s really amazing at generating content, but it doesn’t actually think or know anything, despite how well it can keep up its side of a conversation. If you have domain expertise in what it is generating, the limitations are very clear.
My concern is for the next generation of students coming up with pervasive AIs. Will they learn how to write critically if they rely on an LLM? Will LLM quality go to shit because it’s just being trained on an internet full of dodgy LLM output?
I disagree. I believe that AI is fundamentally destructive to society because it concentrates wealth into the hands of tech companies, removes jobs at a faster rate than previous automations, encourages human isolation by making people less reliant on each other, and produces fake "art" that floods the market and devalues human expression. I believe it is the prime example of evil incarnate.
The prime example of evil incarnate? Not, say, death camps and genocide?
I think it's generally an interesting technology with a lot of uncontroversially beneficial applications - things like voice transcription, drug discovery/response prediction, defect detection, language translation, tumor segmentation, weather forecasting/early warning systems, EEG decoding, malware detection, or OCR. No longer having to memorise FFMPEG commands also doesn't seem that evil to me.
Automation of tasks is something we all already benefit from constantly, like keeping food fresh without having someone collect ice from mountains, but I do agree with the concern that under capitalism it tends to lead to concentration of wealth. I think the productive path is along the lines of UBI or broader economic changes, allowing everyone to capture the utility, not rejecting the technology itself, and definitely not a hate mob against an open-source developer for deciding to add a handy opt-in tool, treating it as a stand-in for evil itself.
What is evil is that AI reinforces technological development which in turn is already responsible for genocide against non-human animals, which in my opinion is on the same level as human genocide.
The fact that we're all okay with software that frequently gets things very very very VERY wrong is extremely worrisome. AI in its current state is a high-speed misinformation machine.
It seems as though the maintainer moved it into an external plugin because they were told, loud and long, that “shoving AI [there] [was] totally the wrong idea.”
It’s certainly an example of how public backlash can change the discourse, but it is not an example of why it was wrong to put AI there.