
(author here)

I've had a VPS with Ubuntu/Debian since 2008, so like 16 years now ... I still use it, but not for anything I actually want to be up all the time

Over time I found the "set and forget" nature of shared hosting a lot more reliable

That might just be me, but I don't think so. I've noticed A TON of link rot due to "I set up a VPS, and wrote one or two good blog posts last year. But now I am doing other things, and the VPS rotted". I think it's mainly monitoring, and putting other stuff on the VPS that's unrelated to the blog

For a blog, I'd use Github pages before a VPS. But as mentioned, that only handles static sites, and scripting is useful (even for blogs)


It is interesting how these companies shift with the political winds

Just like Meta announced some changes around the time of inauguration, I'm sure Google management has noticed the AI announcements, and they don't want to be perceived in a certain way by the current administration

I think the truth is more in the middle (there is tons of disagreement within the company), but they naturally care about how they are perceived by those in power


I think in theory it's a good thing that companies shift with the political winds.

Companies technically have disproportionate power.

It's better that they shift according to the will of the people.

The alternative, that companies act according to their own will, could be much worse.


I would say it's natural. Their one and only incentive isn't, as they'd like to tell you, to "make the world a better place" or some similar awkward corpo charade, but to make a profit. That's the purpose companies are created for, and they always follow it.

Sure, but I'd also say that the employee base has a line that is different than the government's, and that does matter for making profit. Creative and independent employees generally produce more than ones who are just following what the boss says

Actually, this reminds me of when Paul Graham came to Google, around 2005. Before that, I had read an essay or two, and thought he was kind of a blowhard.

But I actually thought he was a great speaker in person, and that lecture changed my opinion. He was talking about "Don't Be Evil", and he also said something very charming about how "Don't Be Evil" is conditional upon having the luxury to live up to that, which is true.

That applies to both companies and people:

- If Google wasn't a money-printing machine in 2005, then "don't be evil" would have been less appealing. And now in 2020, 2021, .... 2025, we can see that Google clearly thinks about its quarterly earning in a way that it didn't in 2005, so "don't be evil" is too constraining, and was discarded.

- For individuals, we may not pay much attention to "don't be evil" early in our careers. But it is more appealing when you're more established, and have had a couple decades to reflect on what you did with your time!


I see it as the natural extension of the Chomsky "manufacturing consent" propaganda model. The people in key positions of power and authority know who their masters are, and everyone below them falls into line.

That's an interesting point, and something I thought of when reading the parser combinator vs. recursive descent point

Around 2014, I did some experiments with OCaml, and liked it very much

Then I went to do lexing and parsing in OCaml, and my experience was that Python/C++ are actually better for that.

Lexing and parsing are inherently stateful, so it's natural to express those algorithms imperatively. I never found parser combinators compelling, and I don't think there are many big / "real" language implementations that use them, if any. They are probably OK for small languages and DSLs

I use regular expressions as much as possible, so it's more declarative/functional. But you still need imperative logic around them IME [1], even in the lexer, and also in the parser.
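As a sketch of that style in Python (hypothetical token names, nothing to do with the Oils lexer): the regexes are declarative, but the loop and position tracking around them are plain imperative code:

```python
import re

# Hypothetical regex-based lexer: one alternation of named groups,
# with imperative position state and whitespace-skipping around it.
TOKEN_RE = re.compile(r"""
    (?P<NUM>   \d+)
  | (?P<NAME>  [A-Za-z_]\w*)
  | (?P<OP>    [+\-*/()=])
  | (?P<SPACE> \s+)
""", re.VERBOSE)

def lex(src: str):
    pos = 0
    tokens = []
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            raise SyntaxError(f"bad char at {pos}: {src[pos]!r}")
        pos = m.end()
        if m.lastgroup != "SPACE":  # skip whitespace imperatively
            tokens.append((m.lastgroup, m.group()))
    return tokens
```

A real lexer would also switch between regexes depending on mode, which is again ordinary imperative logic.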

---

So yeah I think that functional languages ARE good for writing or at least prototyping compilers -- there are lots of examples I've seen, and sometimes I'm jealous of the expressiveness

But as far as writing lexers and parsers, they don't seem like an improvement, and are probably a little worse

[1] e.g. lexer modes - https://www.oilshell.org/blog/2017/12/17.html


OCaml allows mutation via reference cells.

I know that, and obviously you can write a recursive descent parser in OCaml

But I'm saying there's nothing better about doing it in OCaml vs. C++ or Python -- it's the same or a little worse

IMO it's natural to express the interface to a lexer and parser as classes -- e.g. you peek(), eat(), lookahead(), etc.

Classes being things that control mutation
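e.g. a sketch of that interface in Python (hypothetical names, not from any real parser):

```python
class Parser:
    """Recursive descent over a token list; the mutable cursor lives here."""

    def __init__(self, tokens):
        self.tokens = tokens
        self.i = 0  # current position -- the state the class controls

    def peek(self):
        """Return the current token without consuming it, or None at EOF."""
        return self.tokens[self.i] if self.i < len(self.tokens) else None

    def eat(self, expected):
        """Consume the current token, asserting it matches `expected`."""
        tok = self.peek()
        if tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.i += 1
        return tok
```

Grammar rules then become methods that call peek()/eat(), and all the mutation is behind that small interface.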

But objects in OCaml seem to be a little separate dialect: https://dev.realworldocaml.org/objects.html

When I debug a parser, I just printf the state too, and that is a little more awkward in OCaml as well. You can certainly argue it's not worse, but I have never seen anyone argue it's better.

---

Culturally, I see a lot of discussions like this, which don't really seem focused on helping people finish their parsers:

https://discuss.ocaml.org/t/why-a-handwritten-parser/7282/7

https://discuss.ocaml.org/t/good-example-of-handwritten-lexe...

I also use lexer/parser generators, and I like that there are more tools/choices available in C/Python than in OCaml.


OSH behaves like that too -- it's POSIX and bash compatible

At some point I will update the home page to make that a bit more clear - https://oils.pub/


You just got a convert! Huge value in sh compatibility. Bash, not so much!

(author here) The best way to think about it is like Clang (OSH) + Swift or Rust (YSH) in the same project, as I explained in a table here:

https://www.oilshell.org/blog/2024/09/retrospective.html#oil...

You are right in the sense that Clang never replaced GCC on say Debian or Red Hat. As far as I remember, there are only a couple minor distros and one BSD that use Clang as the default.

So even though Clang is extremely compatible with all sorts of GCC quirks, inertia is still strong

And to probably 90% of people, Clang does the same thing as GCC -- they couldn't tell the difference between the two

But I think many people are glad Clang exists, e.g. it pushed GCC forward (at least in error messages, in modularity, and probably more)

And some people use ONLY Clang, not GCC

---

On YSH, most people do think there needs to be a "clean slate" successor language to shell, and that's what YSH is aiming for. Notably, it's informed by re-implementing almost all of bash from scratch, and shares the same runtime

The reasoning behind that is something I noticed with "make replacements" too: many alternative shells [1] are better in one dimension than their predecessor, but worse in other dimensions.

Our goal is to be better in all dimensions, so a superset is a good way to achieve that. We learned the hard way that these 30-, 40-, 50-year-old tools have very diverse users and usages -- i.e. most people might use 10% of the features, but across all users, that's 100% of the features

Nonetheless, OSH has been the most bash-compatible shell, by a mile, for a few years, and it's only getting more compatible. How fast it converges depends on contributions, and we're getting good PRs, but can always use more

YSH also has a bunch of users that are providing great feedback, and helping to make it stable

https://github.com/oils-for-unix/oils/wiki/Contributing

[1] https://github.com/oils-for-unix/oils/wiki/Alternative-Shell...


How so? I don’t recall this, and I used Travis, and then migrated to GitHub actions.

As far as I can tell, they are identical as far as testing locally. If you want to test locally, then put as much logic in shell scripts as possible, decoupled from the CI.


The shell also supports shell scripting! You don't need Just or Make

Especially for Github Actions, which is stateless. If you want to reuse computation within their VMs (i.e. not do a fresh build / test / whatever), you can't rely on Just or Make

A problem with Make is that it literally shells out, and the syntax collides. For example, the PID in Make is $$$$, because it's $$ in shell, and then you have to escape $ as $$ with Make.
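A tiny illustration of that collision (GNU Make; hypothetical target name):

```make
# Make expands $ itself, so every shell $ in a recipe must be
# written as $$.  The shell's $$ (current PID) becomes $$$$.
show-pid:
	@echo "shell PID is $$$$"
```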

I believe Just has similar syntax collisions. It's fine for simple things, but when it gets complex, now you have {{ just vars }} as well as $shell_vars.

It's simpler to "just" use shell vars, and to "just" use shell.

Shell already has a lot of footguns, and both Just and Make only add to that, because they add their own syntax on top, while also depending on shell.


I thought the meme was woodworking or metalworking


I would have said something about a microbrewery.


I think microbrewery might be a little 2010, you probably want to start a coffee roastery.


A few extra <object> in a blog post is a worthwhile tradeoff, if you're literally using raw HTML.

- HTTP/1.1 (1997) already reuses connections, so it will not double latency. The DNS lookup and the TCP connection are a high fixed cost for the first .html request.

- HTTP/2 (2015) further reduces the cost of subsequent requests, with techniques like multiplexing and header compression (HPACK).

- You will likely still be 10x faster than a typical "modern" page with JavaScript, which has to load the JS first, and then execute it. The tradeoff has flipped now, where execution latency for JS / DOM reflows can be higher than network latency. So using raw HTML means you are already far ahead of the pack.

So say you have a 50 ms time for the initial .html request. Then adding some <object> might bring you to 55 ms, 60 ms, 80 ms, 100 ms.

But you would have to do something pretty bad to get to 300 ms or 1500 ms, which you can easily see on the modern web.

So yes go ahead and add those <object> tags, if it means you can get by with no toolchain. Personally I use Markdown and some custom Python scripts to generate the header and footer.
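For example, a raw-HTML page might reuse a shared header and footer like this (hypothetical paths, no build step involved):

```html
<!-- hypothetical: include shared fragments with <object> -->
<object data="/includes/header.html" type="text/html"></object>
<p>Post content goes here...</p>
<object data="/includes/footer.html" type="text/html"></object>
```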


Yes, I’d add that not merely “raw html” but a file on disk can be served by Linux without copying the data through user space (I forget the syscall), and transferred faster than it can be generated.


sendfile? splice? io_uring?


Yes, most likely sendfile.


I think what's missing is that this is an extremely low level programming model:

> Upon receipt of this message in the event E, the target consults its script (the actor analogue of program text), and using its current local state and the message as parameters, sends new messages to other actors and computes a new local state for itself.

It doesn't say anything about whether you do:

    - nginx-style state machines in C
    - callbacks in C++, or C++ 20 coroutines
    - async/await in Rust
    - Goroutines in Go
    - async/await in Python or JS, with garbage collection
etc. I don't think the "actor model" really means that much these days.

What's a "canonical" and successful actor model program? What can we learn from such programs?

I think if you ask 5 people you'll get 5 different answers.
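For concreteness, here's one minimal Python encoding of the definition quoted above (hypothetical Counter actor) -- and note that nothing in it is forced by the "actor model"; threads, coroutines, or nginx-style state machines would encode it just as well:

```python
import queue
import threading

# Minimal "actor": a mailbox plus a loop that, per message, updates
# local state and optionally sends a message to another mailbox.
class Counter:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0  # local state, touched only by run()

    def run(self):
        while True:
            msg, reply_to = self.mailbox.get()
            if msg == "stop":
                break
            if msg == "incr":
                self.count += 1
            elif msg == "get":
                reply_to.put(self.count)  # "send a message" back
```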

---

Also, with

    __u8    opcode;         /* type of operation for this sqe */
    __s32   fd;             /* file descriptor to do IO on */

then you have lost all static typing. It is too low level, so the analogy doesn't really hold up IMO.

Also, I don't understand why it's "do files want to be actors?", not "do Unix PROCESSES want to be actors?"

(copy of lobste.rs comment)


Which is why, when you have programming languages with rich runtimes and ecosystems, the OS kind of becomes irrelevant.

"an operating system is a collection of things that don't fit inside a language; there shouldn't be one"

-- Dan Ingalls

So what happens is that those runtimes are built on top of whatever low level primitives are available, and that is about it.

Even considering UNIX alone, many ways to do asynchronous IO aren't even part of POSIX; they have remained specific to each UNIX flavour.

To some extent, the UNIX/POSIX API surface has been the part of the C and C++ standard library that WG14 and WG21 didn't want to take over into ISO, but that almost every C and C++ developer expects to exist anyway.


> then you have lost all static typing

Do you need it at this level? At some point everything is a bit-field. We impose typing to aid our mental models and build useful abstractions.

When interacting with the kernel we can let go of, then reclaim, our types.

