I think the intuition would be that if a determined "core" feature set of something is so easily and trivially reproduced, then it's not the core/source of its value.
Which makes me wonder: just how much of UI development has software engineering consumed and automated since the 1980s, and how much is still in the realm of pure human art?
This is correct. The core value-add of e.g. Slack is not actually "transmit messages between authenticated users." There were dozens of other applications, both commercial and open source, that already did exactly that when Slack came out and killed them all off.
The core value proposition of Slack is next-level beyond that: it adds a visually appealing, highly intuitive UI/UX that almost anyone can sit down and immediately use, without any studying or setup or manuals or training. No technical skills are required. You don't need to understand anything beyond "type username and password"; the rest is self-explanatory.
> it adds a visually appealing, highly intuitive UI/UX that almost anyone can sit down and immediately use
And yet, there were dozens of other applications doing exactly that (Slack is a list of channels on the left, and the text of the channel in the center).
This is a very astute way to look at it. I'd say then that the value of Slack is to be user friendly, which suc definitely is not for muggles.
But then more difficult questions arise:
- Is the price of Slack worth it, when the alternative is having more educated users who can do very basic command-line calls?
- Same question, but taking into account that Slack captures your data and won't give it back to you? How expensive is it to leave Slack?
Sure, suc sucks, but we could probably converge on something less bloated and proprietary than Slack?
> But then more difficult questions arise: - Is the price of Slack worth it, when the alternative is having more educated users who can do very basic command-line calls?
I'm not sure that it would be difficult to wrap that up in a web interface running on a single server that does nothing but execute those command-line calls - auth, data and everything but session would be managed by the web server.
You could perhaps even make a local GUI application that spawns ssh once and reads+writes to it, and have your 90% slack functionality done in a weekend.
I'm not sure where to post this reply but I have a hot take cooking up for about a week now: Is it too crazy to just leave a clustered SQL DB open to the Internet, static asset on S3, and call it THE backend of a social-* (-networking, -game) app? It's mostly SQL anyway. Maybe a message signing system like IPsec auth header can be set up with a client cert, and TLS packet encryption can be dropped, if it's going to be public data or data is to be advertised as "e2e encrypted". Isn't that going to cut a lot of backend cost?
Is suc scalable? I didn't see any discussion of managing conversations across a cluster of SSH-capable hosts, and you're not going to support more than a few thousand users with a single SSH server.
This is something I would like to explore. If I had to guess now, I would think that the management of many open handles on a single file by the kernel is going to be more limiting than ssh.
But I'd also wager we can go quite far with a single cheap VPS. A few thousand users doesn't seem impossible at all.
As a feat, I like this. It's cool to see what can be achieved with composition. That said, I don't care much for the comparison to Slack etc. No engineer wants to be on the hook for a bunch of things cobbled together like this in production. How are you going to hire talent? How do you test the thing? Debug it? Logs? Analytics? This piece replicates (some of) the chat functionality of Slack et al., but the chat functionality is not 100% of these applications. They're built in completely different environments, and so attacking the status quo as bloated seems like some kind of warped Stockton Rush-esque take on things. Don't get me wrong, there's bloat in software, but "look ma', I built a submarine from bits and pieces, told you those fancy submarines are stupid" doesn't seem like the put-down I think the author would like it to be.
> No engineer wants to be on the hook for a bunch of things cobbled together like this in production. How are you going to hire talent? How do you test the thing? Debug it? Logs? Analytics?
The point was that most of these things become less relevant when the surface area of the codebase gets smaller. Individual utilities are generally easier to understand, test and debug as separate units, logging is already baked into the system (syslog, anyone?), and why would you need analytics?
As a development exercise, this is neat, but its extreme focus on software dev makes it just that: an exercise. Slack incorporates all of the extra stuff suc ignores because it's a service; you're paying for someone else to handle the integrations for access control and authz and such.
Homegrown solutions require varying levels of support. Something like Slack is predictable and that makes it attractive, from a business perspective.
So what do you do when your customers say "this is great, when are you adding screen sharing"? Where do voice calls fit into this architecture, do we just add another UNIX util and stream the bits over SSH?
Furthermore, debugging may be "easier" insofar as the units are discrete (though any good modular architecture will have the same advantage), but what do you do when you find a bug? If you're running a business, you can't be in a place where you're just submitting it upstream and telling your customers that every single bug is waiting for a vendor fix. So now you have to take ownership of maintaining and repairing each and every UNIX utility you use. In what world is that easier than having 500k lines of Go in a consistent company style?
Joel Spolsky's advice is pertinent [0]:
> If it’s a core business function — do it yourself, no matter what.
I'm not talking about self-hosting suc, I'm arguing that all those extra lines of code in Mattermost (and presumably Slack) aren't doing nothing—a lot of them are a necessity if you're selling a software product.
Sure, there is a lot of code that is doing something, but is it doing the right thing correctly and efficiently? It's likely that a > 500k sloc project is filled with low quality and inefficient code. I'm pretty sure this is a natural law. The only way to limit this is by placing quality and efficiency among the highest virtues, which is essentially anti-capitalist.
This was my take as I read it too. It's definitely a great piece of engineering and I really enjoyed reading about it, but comparing it to Slack rather than just IRC feels disingenuous.
That said, I'd be really interested in what tools like Slack would look like if they aggressively pursued simplicity ahead of new features. Capitalism as a whole doesn't seem to encourage this kind of development, so there aren't many examples in the wild of enterprise software that is as simple as it could reasonably be. Nonetheless, it's an interesting thought experiment.
See also ii (https://tools.suckless.org/ii/) doing something similar but on irc directly: use a single standard file for all conversation, auth and access is managed by the irc network directly and there are many UIs already. Like the article, a plugin is anything that reads a file and writes into another.
In case you're passing over this because it sounds like click bait (it KIND OF is), this was actually a good read to me.
It goes over a utility called "suc" (Simple Unix Chat) that implements server functionalities from Slack, Discord, etc. using a very small codebase.
The novel part is it leverages existing unix tools and methodology instead of re-inventing them.
- Auth is handled by SSH.
- Channels are just a file and admin/mod controls are handled by user groups and file permissions.
- Support for rich text, file uploads, etc. aren't a concern as you just write whatever data you want to the channels and let the client interpret it.
- Bots are very easy as you just pipe to/from the channel files.
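To make that concrete, here is a minimal sketch of what such a channel-posting helper could look like. The path, timestamp format, and function name are my own invention, not the article's actual five lines:

```shell
# Hypothetical sketch, not the article's code: append one attributed,
# timestamped line to a channel, which is just a group-writable file.
post_to_channel() {
  chan="$1"; shift
  # date and id come from coreutils; ">>" gives append semantics for free
  printf '%s %s: %s\n' "$(date -u +%FT%TZ)" "$(id -un)" "$*" >> "$chan"
}

# Reading the channel is then just:  tail -f "$chan"
```

Access control falls out of Unix file permissions: whoever has group write on the file can post, and whoever has read can follow it.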
You're not going to make a perfect Slack clone with the 5 lines from the article that write messages to the channels, but I'm pretty impressed by how far you can go with really simple Unix tools.
Existing chat platforms don't "re-invent" Unix tools - they're simply not suitable for building on top of. There's a reason that approximately 0 large production systems are written in shell script using the "everything is a file" paradigm - because it's not acceptable for any non-trivial system that needs to see real-world use.
Slack to most end users is the client, though. As well as the admin, user management UI, scalability, webhooks, etc, absolutely none of which is handled by "you can chat with other users in Linux if you all log into the box" which every *nix has had since before most readers here were born.
Indeed! The question is whether all that is worth so many engineer-hours. I'm not advocating for all Slack channels to be replaced by suc; that would be silly. I'm just pointing out how costly it all is. There's got to be a middle ground that does not require hundreds of thousands of lines of code.
"Append-only files" would be a great fit here, to ensure regular users and bots don't modify others' posts. Linux's chattr +a attribute comes close, but only root can set it; Plan 9's append-only permission bit, set per file and similar to "read" or "write", has sadly never been ported to Linux.
I laughed reading your comparison but I don't agree. I clicked waiting for a shallow trick and found that there was indeed a trick, but not a shallow one at all
This ability to bring complex capabilities to a chat system by leveraging so much of its ecosystem is truly amazing. Definitely worth the hn front page.
And you only need basic shell literacy in order to use it, little more to understand it
99% of the issues I've seen for these features have been the OS making it increasingly a pain in the ass to grant an application permission to gain full-screen control or camera/mic access.
I understand the blog post here is about suc, and it is acknowledged immediately and honestly that the headline is slightly misleading, but there's an important message here that ultimately has nothing to do with suc.
Almost all modern software is bloated to hell. The idea that one would need 1.7 million LoC for a rich chat server is absurd. To paraphrase Bill Gates, I don't think I could "spend" that many lines of code on a rich chat server if I tried.
Kudos not simply for calling attention to the problem, nor simply for proposing an alternative, but for reminding people that the wheel need not be reinvented. Cleverly using existing systems and subsystems for their properties (e.g. SSH for authentication and encryption), in a manner that meets the intentions of those properties but not the originally imagined purpose of the system, is as good an example of the hacker ethos as any, and the software world would benefit immensely from drawing on this concept. Lower costs, shorter development timeframes, and fewer engineering hours wasted rewriting functionally identical code are good for the developer, good for the company doing the developing, good for the end user, and ultimately good for humanity.
I wish to see a lot more of this kind of creative destruction and will try to implement such clever techniques in future projects myself.
FWIW, most of the TypeScript lines of code are in the E2E tests[0] and the webapp dir [1], which, as the name suggests, contains "the client code for the Mattermost web app". So we should really only be counting lines of Go code.
I was once in a meeting about which web authentication framework to use for a site. It ran so long, because of bikeshedding, that during the meeting I implemented, and demoed at the end, a solution based on the fact that HTTP already provides an authentication facility.
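For context, the "HTTP already provides authentication" trick is mostly Basic auth, which is nothing more than a header carrying base64 of user:password. The credentials and URL below are illustrative, and real deployments need TLS on top:

```shell
# Build the Authorization header by hand, to show there is no magic in it.
user=alice
pass=secret
auth=$(printf '%s:%s' "$user" "$pass" | base64)
printf 'Authorization: Basic %s\n' "$auth"
# A client would send it like:
#   curl -H "Authorization: Basic $auth" https://example.com/protected
```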
HTTP auth is vastly insufficient for most use cases. You are missing password resets, 2FA, bot/abuse detection, and probably a whole collection of other stuff. I'm also not sure whether password manager extensions can prefill it. And the login form is only 1% of what an auth library does.
Also, to make it user-friendly. I know we're on HN so we're all tech people, but this is super nerdy stuff, lol. This would never get the mass adoption that tools like Slack have, which work across multiple platforms, including mobile.
I want to thank you sincerely for capturing the sentiment of the piece. This is exactly what I had in mind and I'm genuinely happy you were able to get that from the text :)
Most current distros prevent you from making setuid bash scripts, on security grounds. But you can get most of the same effect with a specific sudoers entry.
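For illustration, a sudoers entry of the kind meant here might look like this. The group name, target user, and wrapper path are all made up:

```
# /etc/sudoers.d/chat -- members of group "chat" may run one specific
# wrapper as the dedicated "suc" user, without a password:
%chat ALL=(suc) NOPASSWD: /usr/local/bin/suc-post
```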
I lament the recent removal of taint mode from Ruby :( as tainting is a powerful mechanism that could go way beyond scripts: imagine Rack / Rails #html_safe but via tainting so it could have worked more thoroughly (e.g concatenate/interpolate/format a tainted user string into a SQL snippet would blow up, preventing a whole class of SQL injections by design)
Some folks used to use taint for CGI/mod_perl to such great effect. It's not a magic bullet by any means (security is hard!) but it's a really effective tool still.
Are there even distros that let you do it? When I wanted to do this myself, everything I read led me to believe that the setuid bit doesn't apply because the file itself isn't running; the interpreter in your shebang is.
Until I found the wrapper file in the GitLab repo, I was holding out hope as I read through that there was some way.
Or write(2) [1]. I was thinking immediately of ntalk though, because it's the last time I used such a thing on the same machine (because IRC wasn't encrypted back then). The nice thing about that is that you can have it listen locally only, so one needs a user account on that machine. So it's great for, say, SDF, but also these larger clusters and all that. And it doesn't do auth; you let PAM or BSD_Auth do that. It doesn't do sockets or encryption; you let TLS do that. And you could just use SSH to get to a secure shell, using tmux and (n)talk from there. It goes without saying that (n)talk isn't 5 lines of bash, as it's written in C.
I actually run/manage a Mattermost Team server (and have known/used/administered Unix/Linux for 25+ years), and it's not a fair comparison. BTW: the relatively low RAM usage and speed of Mattermost is underappreciated, IMHO.
We all know "suc" is merely a tongue-in-cheek thought exercise. :) Good luck getting 100's of millions of Twitter/Bluesky/Threads users to even use Mastodon.
Nobody but geeks can use "suc", Mattermost can be used by a far larger proportion of people.
Good recently posted HN-linked article about how most users have a lot less skill than you think:
"The Distribution of Users’ Computer Skills: Worse Than You Think":
https://www.nngroup.com/articles/computer-skill-levels/
> suc does all that by leveraging SSH, UNIX’s access control API, and UNIX’s text-based modularity.
On any fair metric, this should inflate the volume of code metric for any project (whether leveraging those APIs directly or not). It would still favor this implementation on that front, and there are still other merits to the approach besides code volume. But something does irk me about touting supposed minimalism which externalizes almost all of its maximalism. Sure, the dependencies are probably there and sure, it’s good to use the platform. But it’s not a reasonable claim that “five lines” can recreate any portion of any moderately complex software. Otherwise we’d have “SSH in five lines of bash” and so on.
I had a similar thought. But it also struck me that "externalising the maximalism" by using a library like the unix ssh implementation is probably the best way to go about it, since it's a widely used well tested library that implements a complex use case.
In scientific programming I'd say that's the same as using a library like GSL, BLAS or even numpy. The net impact on LoC in my project is minimal, even though it could potentially be calling thousands of lines of code. The point is that from a maintenance perspective I only need to maintain 5 or 10 lines, and if I find a bug in there I can file a bug report upstream, rather than maintain the complex details of the implementation.
The title is being a bit smart-ass for clicks, and the author admits as much right at the top. But beyond that, it's pretty great that they implemented a fairly basic version of Slack with standard Unix tools in a straightforward way.
> But it also struck me that "externalising the maximalism" by using a library like the unix ssh implementation
Not saying this to be argumentative, only to emphasize the same conflicting dynamic I saw in the post: this is exactly the same rationale that people routinely lambast here about NPM and other sources of dependencies. It’s libraries and frameworks all the way down. I’m cool with that, I’m just not cool with picking and choosing when it’s cool without any particular principle.
It's interesting to think about where such a principle would land.
I think having five lines of code (well, it's more like 50 or so, reading the article) with some key and reliable dependencies is better for maintainability than having no dependencies but a substantially larger code base. As you point out, there are definitely limits, and npm's left-pad, is-odd, and is-even packages are obvious examples where the added dependency is less maintainable than implementing the code directly.
i think one important difference between externalizing complexity to unix tools like ssh, and externalizing complexity to npm libraries, is related to (for lack of a better term) quality control
any dingbat with a terminal can produce an npm library that you can use in your application, the level of quality control is basically zero
but it takes a pretty strong track record to get your software into coreutils, or really any base linux distribution
to put it kind of cynically, i think there is an enormous difference between relying on ssh vs. relying on leftpad, gatekeeping based on competence measured over time is i think actually important and good to do
I think the main difference is that Unix tools are intended to work with each other, and therefore need fewer lines of code, as opposed to tools on other systems.
That was exactly part of the point I was trying to make with the article. The other part, which I left unsaid but should probably add explicitly, is that Slack et al. run on a server with a kernel, but choose to ignore the access control capabilities of said kernel and instead reimplement them. I think it is a shame, and more software should strive to be security agnostic instead of reimplementing access control for the umpteenth time.
In general, composing Unix commands is a very powerful means to construct complex applications... poorly, with no real pathways to fixing their shortcomings. I believe the traditional "Unix Way" is to gloss over said shortcomings and pretend they don't exist, and when that fails, move the goalposts and argue that they don't exist "in the real world".
For example, what combination of shell commands can I use to output the number of files in a directory?
GNU find has a -printf which would probably avoid the need to use tr. Personally, two hardlinks to the same underlying storage counts as two files to me, but I understand why you might disagree. I'd bet there's something out there that could give you only the count of unique storage areas, but I don't know how that interacts with deduplicating filesystems.
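To make the GNU-find approach concrete, here's one hedged way to do the count (GNU find assumed; whether hardlinks should collapse into one is left to taste):

```shell
# Count directory entries (any type, excluding . and ..) without tripping
# over weird filenames: print one dot per entry, then count the dots.
count_entries() {
  find "$1" -mindepth 1 -maxdepth 1 -printf '.' | wc -c
}
# Restrict to regular files only by adding:  -type f
```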
I had originally written "Hint: it probably isn't what you think it is, if it's even possible", but after testing for a bit, I updated my comment in what was apparently a fit of hubris to say you couldn't.
|For example, what combination of shell commands can I use to output the number of files in a directory? Hint: it probably isn't what you think it is, if it's even possible.
The inode is the important thing when all is said and done. It is flexible in that it can contain all the metadata needed to present a file to a process. Sometimes that metadata is a list of blocks in the filesystem; sometimes it points to another inode.
I think of it like an old-timey 'card catalog'. You have a bunch of tiny drawers filled with cards. Some of the cards are big and blank spacers with a prominent tab sticking above the normal top edges (Directory). Sometimes you have a card that points to another card elsewhere in the catalog (link). Sometimes you find the details of a specific book on a specific shelf (block data).
Point is, they are all cards. The comment essentially asked for a command to say "how many cards between these two spacers". It's a "trick" question as old as Usenet: spring the distinction between link inodes and list-of-blocks inodes and say "Ah-HAH!! Gotcha", but in reality it's a silly game of jumping levels and misdirecting semantics that preys upon the distribution of understanding in a forum for personal glory.
The inode is the item, it is the card that is being counted, no matter what is printed on it. imho.
Mastering UNIX allows you to pull tools out of a toolbelt and adapt to the situations as the arise. No one tool should solve every job. You can happily ignore problems that aren’t relevant to the issue at hand. Two servers need to talk to each other behind a strong firewall in a private network, sure, skip complex authentication setups. Knowing how to build in layers is good engineering practice.
Whereas large scale enterprise commercial solutions make money by selling you on complete solutions that force you to relearn everything their way from the ground up. Just look at how AWS has hijacked so many concepts from the modern web and trapped people into building “cloud-agnostic” wrappers to try and wrangle the mess. Still we’re mapping onto their redesigns of the same old stuff.
At some level it's unavoidable, hardware vendors need to agree on instructions after all, but when looking at the situation from a high level, it's best to keep our ideals in mind and steer the ship so as to find ourselves in paradise, not lost in a sea of pirates.
One ideal: Modularity. Promotes healthy competition.
Another ideal: Simplicity is more easily achieved when modular components may be used (and tested) in isolation.
Chapter 8 of The Linux Programming Interface mentions that applications running on Linux have basically 2 options for authentication:
* Roll it themselves, maintain the database and all that jazz
* Delegate it to the (very robust, very mature) Linux user authentication stuff
Ever since reading that I've found myself wondering why more apps don't simply use SSH keypairs for authentication, given that they're already such a battletested mechanism. I get the whole "no MFA!" argument, but still.
If we really wanted MFA, we could roll a PAM module, and whatever pushes SSH authorized keys could also push MFA seeds. But IMO this would protect against very unusual attacks and annoy ssh agents users everywhere.
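As a sketch of what "SSH keys as app identity" can look like, an authorized_keys forced command pins a key to a single program, so the key is the login and the app decides everything else. Every detail below (paths, key, program name) is illustrative:

```
# ~chat/.ssh/authorized_keys -- this key can only ever run the chat shell:
command="/usr/local/bin/chat-shell alice",no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAAC3... alice@laptop
```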
UNIX shell is good like that: you can combine things in shell scripts and on the command line to make such a combined program, if you are using programs that are designed to support that (unfortunately, too many modern programs don't, although some do).
However, is there a race condition with writing to the files? Since it is append mode, I would expect that would prevent other processes overwriting the file, although would it prevent other processes writing in the middle of a line if it gets interrupted? Actually, I don't know.
I had made a simple two user chat system with logging using ts, tee, and nc; using a shell script with only a single line of code. This produces two log files, one for sending and one for receiving; however, I can then use cat and sort to interleave the logs into a single log file.
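I can only guess at the exact invocation, but the described setup might have looked something like this (peer host and port are invented); the interleaving step at the end is runnable as-is on sample data:

```shell
# Hypothetical two-user chat: each side logs its own traffic, timestamped.
#   nc -l 3000                   | ts '%FT%T' >> recv.log &    # peer -> me
#   ts '%FT%T' | tee -a send.log | nc peer.example 3000        # me -> peer
# Afterwards, cat + sort interleaves the two logs chronologically:
printf '2024-01-01T00:00:02 them: hi\n' > recv.log    # sample data
printf '2024-01-01T00:00:01 me: hello\n' > send.log
cat recv.log send.log | sort > chat.log
```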
It's cute and fun if you're the kind of person that enjoys terminal UIs and can keep an encyclopedia's worth of CLI tools and options in your head.
I don’t enjoy Slack and the like due to the notification overload and FOMO burnout. The UI is a memory hog for what it does.
But for the computer user that wasn’t born at Bell Labs with a keyboard in their hands, it’s pretty decent. If you can navigate gmail or a word doc you can probably figure out how to use Slack.
Of the three (Slack, Teams, Discord) I actually prefer Discord. It's the easiest to use, the fastest, etc. Too bad that any corporation of size uses Teams or Slack.
I think part of the point the author was making is that rendering and interacting with the server can vary depending on the client you build. You could have a CLI client or a React client and build them with two completely different goals in mind. For example, monitoring and botting channels vs. end-user experience.
Very cool! I love seeing things like this put together using just basic Unix tooling. I don't know if I'd call it a Slack clone though, but cool nonetheless.
Too many software projects just ignore the underlying abstractions of the OS they run on. I feel weird about all this virtualization stuff, like we're missing something obvious.
Rob Pike said in 2000 that systems research is irrelevant http://doc.cat-v.org/bell_labs/utah2000/ and I'd like to think that if we went back to simple tools like this we could make it somewhat relevant again.
The aforementioned 50 years of OS development have given us facilities through which you can very tightly control what access you give to ssh clients, in a tried and tested way.
The authors are right when they say that sshd and Unix users are probably the most feature-full auth scheme in existence.
While we are at bloated things reimplemented in a few lines of shell: https://github.com/mlang/openai-kiss is my CLI client for ChatGPT. No Python, just curl and jq.
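The core of such a client is likely a single request plus a jq projection. This is my guess at the shape, using OpenAI's public chat-completions endpoint, not necessarily the repo's exact code; the model name is an arbitrary choice and OPENAI_API_KEY must be set:

```shell
ask() {
  # POST the prompt, extract the assistant's reply text with jq.
  # (No JSON escaping of $1 here -- fine for a sketch, not for production.)
  curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H 'Content-Type: application/json' \
    -d "{\"model\":\"gpt-4o-mini\",\"messages\":[{\"role\":\"user\",\"content\":\"$1\"}]}" \
  | jq -r '.choices[0].message.content'
}
```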
Unfortunately I don't see any real way to avoid setuid here; otherwise, a neat experiment. POSIX ACLs might help, but the crucial part is prefixing the username to the message, which requires privilege.
Lots of unix facilities are criminally underutilized in modern systems
Run a daemon with the right userid to do the writing, and have it make a pipe device for each user to write lines into.
I don't know how much effort is meant to be put into securing this from impersonation, but info can be pulled out of /proc/ if the permissions are set up right.
Unix's default security mechanisms are not enough, otherwise tools like KVM/Firecracker and gVisor wouldn't be the standard for all cloud hosting. Unix just wasn't built to sandbox truly hostile code.
I’ve always been really intrigued by the idea of building things as simply as possible and leveraging as many existing tools as I can to do so. My problem is that most of my projects require a web frontend which would mean calling bash scripts from Go (or whatever language I implement the server in). This just FEELS wrong, but I can’t really articulate why. Am I wrong to feel that way or is there some good reason I’m not seeing to not wire up bash scripts like this?
Unless you are writing very tight code (like for a real-time system, or something you know will be invoked millions or billions of times) then shipping > code golf. Bash away!
If your site is served by Apache or nginx, you don't have to call into PHP or other languages in order to run bash commands. You can define a route in which accessed pages are fed to a bash script and stdout gets piped back to the client.
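That pattern is plain old CGI. A sketch of such a handler, assuming something like Apache's mod_cgi or nginx plus fcgiwrap as the glue (the route and content are invented):

```shell
# cgi_response: headers first, a blank line, then the body -- the web
# server pipes our stdout straight back to the HTTP client.
cgi_response() {
  printf 'Content-Type: text/plain\r\n\r\n'
  printf 'entries in %s: %s\n' "$1" "$(ls -A "$1" | wc -l)"
}
```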
Do they require a web frontend, or do you just want one? Because you could argue a chat program requires a web frontend, but suc doesn't use one regardless.
bash is concise but there are other languages nearly as concise. It'd make more sense to use them rather than go+bash, once you cross the rubicon of not relying entirely on stuff that comes with UNIX out of the box.
What do I do after I sign up to the-dam.org?
I did not get any email after the payment, and ssh-ing doesn't work either.
(I just get a `Permission denied (publickey)`.)
Do you count the lines that Slack needs to get the text to appear on your screen? Or to send the network packets? Do you count the lines involved in all the routers on the way, too?
This is an incredibly cool showcase of the power of Unix command-line utilities and their composability. However, I have to say, as someone who recently got into amateur system administration for a server that hosts several services I and my friends use, I would not want to be responsible for maintaining this!
Yeah, in the tests I ran while designing it, it became apparent that keeping everything in order was kind of a pain. However, with Guix it actually becomes quite easy. There is no "state" in the system: the system is simply a function of your system declaration (the /etc/config.scm file) that defines the whole OS. No need to remember what to chmod, chown, etc.
one nicety that Slack etc. can't match is that this works on a local network even if internet access is lost (and doesn't slow down with terrible network latency). Going to install this on our server on the Greenland ice shelf...
$ rain | wall # .. my favourite thing to do to my colleagues during the end of coding marathons we'd find ourselves in, during the 80's ... on machines that weren't always fortified against the rain .. ;)
Just a few years ago in university the lab computers were running debian 10 and the sysadmin would make announcements about linux-specific infra outages through a script that ssh'd into every machine and wall'd the outage message.
We would often get people using lab computers remotely and you could talk or wall them to tell them you were rebooting soon. Good times.
I'm seriously considering trying something like this just to see if it works. TCP would kill it though (maybe?). But maybe UDP plus some clever ffmpeg invocation...
If you went with the heavier-weight use of Asterisk, you could have softphones authenticated to Asterisk, but, no inbound calls from the softphones would be accepted. Only if 1 user sought to call another would Asterisk "dial" each softphone; which might be sufficient in terms of security (you would have to have a working SSH account to be called).
I think the main issue would be the buffering existing tools are likely to assume. You don't need more than a few hundred ms of latency for a phone call to start feeling really weird.
That's the thing with social networking. There's a few core problems and it's been solved many times. Such as in 1973 at Community Memory in Berkeley: https://en.wikipedia.org/wiki/Community_Memory
Or through newspaper classifieds ads, French salons, compuserve, bars, aol, cb radio, friendster, phone phreaking party lines, dialup bbses, icq, irc, myspace, Facebook, netnews, whois/finger/talk/uucp. If you read Carolyn Marvin's book, "When old Technologies Were New" (1988) you'll see her documentation of this in its first digital form - via lonely telegraph operators chatting with each other in the 19th century.
can someone post the following to a channel and delete everybody's home directory that reads it with usuc?
: rm -rf ~/
[edit]
They cannot. On careful rereading, usuc passes the data through the pipe when writing a message, not when reading it. So the channel is just full of raw terminal escape codes, if I understand correctly.
In what way is its functionality "nearly the same as Slack"? Or are people these days just using Slack as a generic term for "channel-based chat" without acknowledging the long history of chat apps? (In which case I would argue it is much closer to the functionality of IRC, if still significantly short of it).
I was pitching a related concept to a young investor lately, and she had not heard of IRC. If you stick around long enough, all that is old is new again. If you can profit from this, it's great. If you can't, it is a source of frustration.
I agree that Slack is much more featureful, but I fail to see what critical piece of IRC functionality is missing. If anything, you don't need a bouncer for chat history with suc, so it's more featureful than IRC.
With that said, I seldom used IRC, so this is a genuine question for people who used it and miss some features.
It's been years since I used/administered IRC, but I also think there are common misconceptions about what IRC actually is. At its core, IRC is an incredibly basic tool, and most of the end-user features come from layering additional services on top of IRC, both at the client and server levels. These services can be pretty complex and cumbersome to manage, and in my opinion, never quite seem to solve the problem exactly. Persistent chat history is certainly the most glaring example.
At the risk of becoming a meme, it's similar to how practically nobody uses Linux alone; it's almost universally accompanied by something like GNU.
OK, for starters, the while loop. The main rule of writing shell scripts is: Use The Shell, Luke. Don't start external programs if your shell (which is already running) can do the job. "while /usr/bin/true" is nonsense: an external program gets executed on every iteration for nothing. There are plenty of bash builtins (since this shell is used in the example) that evaluate to true, like ":", "test 1", "(( 1 ))", maybe others too, so "while :" would be better. But why even use a dummy true statement, since the loop terminates with read? You can put the read directly in the while condition: "while read -r line; do" ... Same with echo and date: just use one single printf builtin.
while read -r line ; do printf '%(%FT%T%z)T %-9s %s\n' -1 "$(/usr/bin/id --user --name --real)" "$line" ; done
Thanks for the pointer about the useless use of true.
Using builtins may be a security risk, as they can be shadowed by function definitions (hence the use of full paths everywhere).
I did not know about bash's printf extension for printing a date. I'd need to use the builtin to get it, though. But I've received good advice elsewhere on how to do that securely, so I'll do it and write it up, because that information isn't easy to come by.
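For reference, a sketch of the shadowing issue and one way around it: the "builtin" keyword forces bash to skip any function with the same name, which I assume is the sort of hardening meant here.

```shell
#!/bin/bash
# Suppose an attacker managed to inject a function shadowing printf:
printf() { echo hijacked; }

# "builtin" bypasses functions and aliases and runs the real builtin,
# including its %(datefmt)T extension; the -1 argument means "now".
builtin printf '%(%FT%T%z)T real\n' -1

# Alternatively, remove the shadowing function outright:
unset -f printf
printf '%(%FT%T%z)T real\n' -1
```

The %(datefmt)T feature needs bash 4.2 or later; an external /usr/bin/printf does not have it.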
This is very reminiscent of the initial HN response to Dropbox [0]. It's like saying that you can build a Tesla clone by attaching 3 pieces of wood together along with 4 wheels.
Although it is technically cool, it misses practically all of what makes Slack so popular - UX.
>it misses practically all of what makes Slack so popular - UX
I strongly disagree. Slack has nearly the same UX as every chat platform of the past 2-3 decades. It's a slight iteration on AOL Instant Messenger. There's also a large amount of very similar software and straight-up Slack clones that are not very popular at all.
I would say what makes Slack so popular is how easy it is to set up and get running, both as a service and for every user, combined with the availability of easy integrations/plugins.
> I would say what makes Slack so popular is how easy it is to set up and get running, both as a service and for every user, combined with the availability of easy integrations/plugins
What you are describing is quite literally the UX, the user experience, and Slack overall has quite a good one. Although several of the newer features are somewhat lacking in that regard, like the threads feature.
there are a bunch of features in slack beyond the core chat stuff, like:
1. being connected to multiple communities and switching between them instantly
this can of course simply be replaced by connecting to different servers in a tabbed terminal and using the terminal's built-in cmd-1/2/... shortcuts, which happen to be the same as in Slack.
2. metadata about others, like their timezone or how to pronounce their name, is quite important for distributed teamwork
I'm actually not sure how reliable this is even in Slack. In general it can be useful, but I'm not sure how to solve it elegantly when the chat runs remotely...
3. online/idle presence indication
maybe we should just spawn a loop in the background, which gathers idleness status from the OS and uploads it when it changes, into world-readable files, and the remote clients can just check those files whenever they want.
4. extra status indication with automatic expiry, e.g. when someone is away from the keyboard because they are having lunch
we use this feature often and it's really helpful for knowing when we can expect a response from someone.
again, it's quite simple to model this as a plain text file, and we can even use emojis to get a very similar effect to setting " lunch" on Slack. people would need to know the emoji-selector shortcut though... like cmd-ctrl-space on macOS.
5. text search across all channels/rooms
assuming the chat is being logged into files, a recursive (rip)grep could work to some extent, but from a search result one might also want to get back to the surrounding context of the hit.
6. threads
this complicates the implementation a lot more, but we found it an obvious improvement over the single-threaded IRC model of communication
7. having threads open on the side, so people can track at least two streams of comms at once
it would require starting the chat app multiple times and doing some window management to see them side by side
now obviously all of this can be done a lot more simply, but those implementations typically always lack somehow. not sure why that is...
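For what it's worth, point 5 really is close to a one-liner if each channel is a log file. Assuming a hypothetical ~/chat/ directory with one file per channel:

```shell
# Search every channel log, case-insensitively; -r recurses, -n prints
# the line number, and -C 3 shows three lines of context around each hit.
grep -rin -C 3 'deploy' ~/chat/
```

Jumping from a hit back into the live channel is the one part plain grep can't give you, which matches the caveat above.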
Points 3 & 4 could be combined: like .plan, each user has an o+r .status file. Its contents are the user's status message, but its modified time is used specially to indicate when that user was last active. Any time the usuc client writes a message to a channel, it would simply touch the .status file. And whenever an expanded usuc or some other tool lists the users in a channel (that is, all users in the group that owns the channel file), it would mark as idle any whose .status mtime is older than, say, 30 minutes.
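A minimal sketch of that scheme under the stated assumptions (an o+r ~/.status file, a 30-minute idle cutoff; the helper names and the /home layout are made up):

```shell
# Publish a status line; the write refreshes the file's mtime, which
# doubles as the user's "last active" timestamp.
set_status() {
  printf '%s\n' "$1" > "$HOME/.status"
  chmod o+r "$HOME/.status"
}

# A user counts as idle if their .status was modified more than
# 30 minutes ago; find -mmin +30 prints the path only in that case.
is_idle() {
  [ -n "$(find "/home/$1/.status" -mmin +30 2>/dev/null)" ]
}
```

A usuc wrapper would then call set_status (or a plain touch of the file) on every message sent, so activity keeps the mtime fresh automatically.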
http://catb.org/~esr/writings/unix-koans/ten-thousand.html
Master Foo once said to a visiting programmer: “There is more Unix-nature in one line of shell script than there is in ten thousand lines of C.”