Hacker News | eknkc's comments

I think it makes a lot of sense for Vercel to push RSC and all the complexity of new Next.

- They sell compute and bandwidth. You cannot make much money if people build SPAs that load from a CDN and execute on the client.

- SSR kind of gives them some server compute, but as soon as the initial page is loaded, we are back to point 1.

- Here comes RSC then. And they pushed hard. Made it look like the best thing ever. Oh the initial load time is the most important metric. You gotta make sure you hit this score on whatever benchmark thing…

- They also kind of acquired React. It is basically Vercel’s now so they can dictate the development direction.

- All this with almost no benefit to developers (compared to something like pre-app-dir Next) but it likely helps them a lot in terms of revenue.

I mean, this might read like a “Vercel evil” post, but it is what corps do and they seem to do it well.

I used next on a project recently and will not touch it again because the hype did not materialize as any type of gains during our own development.


Likely because they are compiling Java with WasmGC extensions and the like. If you try C, Rust, Zig, etc., they tend to run extremely fast in WASM.

Go also has a WASM target, which runs pretty slowly compared to its native binaries. The GC extensions might help, but as far as I remember, Go's memory model does not fit WasmGC, so it might never be implemented.


I'm somewhat curious why they even chose Java over Rust or Zig, considering the spreadsheet data paths should be relatively straightforward even with a lot of clone() activity.

The calculation engine was written in Java before Rust or Zig were invented.

Gotcha, I made the mistaken assumption this was a rewrite.

If you are on macOS, give OrbStack a shot. It works surprisingly well and fast.

+1. Love OrbStack. It’s embarrassing how bad docker is by comparison.

If your company is paying for docker desktop, stop right now and switch to OrbStack.


I'll give it a shot - how useful are the Pro features?

I subscribed, but only to be license-compliant. I've never used a pro feature, though I've looked at things like the debug mode. Honestly I just use the same set of command line tools I'd use with the official Docker client.

Does it keep things contained?

Does it mess up or clutter the system?

Currently I am using a Fusion VM to have something similar to WSL2 on Mac. Is this a better solution?


Yeah, I took a look at the code and the prompt documentation (which is in Chinese) and I came to the conclusion “why?”. It is interesting though.

I think the last sample needs a `fba.reset()` call in between requests.

BTW, I have used Zig a lot recently and the opaque allocator system is great. You can create weird wrappers and stuff.

For example, the standard library JSON parser will parse JSON and deserialize into a type you request (say, a struct). But it needs to allocate. So it creates an arena for that specific operation and returns a wrapper that has a `deinit` method. Calling it deinits the arena, so you free everything in your graph of structs, arrays, etc. in one go. And since it receives an upstream allocator for the arena, you can pass in any allocator: a fixed buffer allocator if you wish to use stack space, another arena, maybe a jemalloc wrapper, a test allocator that checks for memory leaks... whatever.
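For anyone curious, the pattern looks roughly like this (a sketch against the recent `std.json` API with `parseFromSlice`/`Parsed`; older releases used a different `parse` shape, and the `Config` type here is made up for illustration):

```zig
const std = @import("std");

const Config = struct {
    name: []const u8,
    port: u16,
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // parseFromSlice allocates everything it needs (strings, slices,
    // nested structs) into an internal arena fed by the allocator
    // you pass in.
    const parsed = try std.json.parseFromSlice(
        Config,
        gpa.allocator(),
        "{\"name\": \"demo\", \"port\": 8080}",
        .{},
    );
    // One deinit tears down the arena and frees the whole object
    // graph at once.
    defer parsed.deinit();

    std.debug.print("{s}:{d}\n", .{ parsed.value.name, parsed.value.port });
}
```

Swapping `gpa.allocator()` for a `FixedBufferAllocator`, a testing allocator, or another arena is the whole trick: the parser never knows the difference.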


When I read JSON I always end up holding onto values from some of the keys. Sometimes the keys too, if the node is abstract enough.

I assume the receiver then has to know it has to clone all of those values, yes?

That seems a little tricky for general code and moreso for unit tests.


Unit tests are trivial because you can probably use a single arena that is only reset once at the end of the test. Unless the test is specifically to stress test memory in some form.

> I assume the receiver then has to know it has to clone all of those values, yes?

The receiver needs to understand the lifetime any which way. If you parse a large JSON blob and wish to retain arbitrary key/values you have to understand how long they're valid for.

If you're using a garbage collected language you don't have to worry about it (you just have to worry about other things!). You can think about it less if the key/values are ref-counted. But for most C-like language implementations you probably have to retain either the entire parsed structure or clone the key/values you care about.
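In Zig terms, "clone the values you care about" usually means duping the slices into a longer-lived allocator before the parsed tree goes away. A sketch (the `Doc` shape and field name are hypothetical):

```zig
const std = @import("std");

fn extractName(long_lived: std.mem.Allocator, json_text: []const u8) ![]u8 {
    const Doc = struct { name: []const u8 };

    // Scratch arena just for the parse; everything the parser
    // allocates dies when it is deinited below.
    var scratch = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer scratch.deinit();

    const parsed = try std.json.parseFromSlice(Doc, scratch.allocator(), json_text, .{});

    // Copy the one slice we want to keep into the caller's
    // allocator, so it outlives the scratch arena.
    return long_lived.dupe(u8, parsed.value.name);
}
```

The receiver only has to reason about one lifetime (the returned slice), and the rest of the parsed graph is bulk-freed.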


I assume the following is perfectly doable from a technical perspective, but is there any community support for using multiple allocators in this case: e.g. parsing general state into an arena, and the specific values you want to use afterwards into a different allocator so they remain long-lived?


You can pass the json deserializer an allocator that is appropriate for the lifetime of the object you want to get out of it, so often no copying is required.


Right, but that means you lose the simplicity and performance benefits of an arena allocator.


I've mainly written unusual code that allocates a bunch of FixedBufferAllocators up front and clears each of them according to its own lifecycle. I agree that more typical code would reach for a GPA or something here. If you're using simdjzon, the tape and strings will be allocated contiguously within a pair of buffers (and then if you actually want to copy from the tape to your own struct containing slices or pointers, you'll have to decide where that goes), but the std json stuff will just repeatedly call whatever allocator you give it.


Well, see, the problem is that you’re going to have to keep the allocator alive for the entire lifetime you use any data whatsoever from the JSON. Or clone it manually. Both seem like they largely negate any benefit you get from using it?


Why wouldn't this be better done with a class that takes care of its memory when it goes out of scope?


There are no automatically-invoked destructors in Zig.


Perhaps if this was added it would prove to be a better solution in this case?


It would prevent you from writing the bug at the top of the thread.

I have stopped considering this sort of thing as a potential addition to the language because the BDFL doesn't like it. So realistically we must remember to write reset, or defer deinit, etc. This sort of case hurts a little, but people who are used to RAII will experience more pain in cases where they want to return the value or store it somewhere and some other code gains responsibility for deinitializing it eventually.


On the other hand, it is clearer where things are released. When over-relying on destructors, it often becomes tricky to know when they run and in which order. This kind of trade-off is important to take into consideration depending on the project.


Maybe I'm too used to C++ and Rust, but I don't find it tricky at all. It's very clearly defined when a destructor (or `fn drop`) is called, as well as the order (inverse declaration order).

What I would like to see would be some way of forcing users to manually call the dtor/drop. So when I use a type like that I have to manually decide when it gets destroyed, and have actual compile checks that I do destroy the object in every code path.


I will admit that I really miss the "value semantics" thing that RAII gives you when working in Zig. A good example is collections, like hash tables: ownership is super clear in C++/Rust. When the hash table goes away, all the contained values go away, because the table owns its contents. When you assign a key/value, you don't have to consider what happens to the old key/value (if there was one); RAII just takes care of it. A type that manages a resource has value semantics just like "primitive" types, so you don't have to worry.

Not so in Zig: whenever you deal with collections where either the keys or values manage a resource (i.e. do not have value semantics), you have to be incredibly careful, because you have to consider the lifetimes of the HashMap and of the keys/values separately. A function like HashMap.put is sort of terrifying for this reason; it is very easy to create a memory leak.

I get why Zig does it though, and I don't think adding C++/Rust style value semantics into the language is a good idea. But it certainly sometimes makes it more challenging to work in.


Among systems languages, I've mostly used C and Zig. I don't think dtor order is tricky so much as I think that defaulting to fine-grained automatic resource management including running a lot of destructors causes programs that use this default to pay large costs at runtime :(

I think the latter problem is impossible, you end up needing RAII or a type-level lifetime system that can't express a lot of correct programs. I would like something that prevents "accidentally didn't deinit" in simple cases, but it probably wouldn't prevent "accidentally didn't call reset on your FixedBufferAllocator" because the tooling doesn't know about your request lifecycle.


You want a code analyzer on top of a language like Zig. I think people assume it's hard because it would really be hard for C. It would probably be MUCH easier in Zig.


One of Zig's design goals is to have as little implicit behavior as possible.


I am well aware.


Because you can have an arbitrary number of objects that can all be freed in O(1), instead of traversing a tree and calling individual destructors. An arena per object makes no sense.


Needs vary. Some memory has to stay alive after the parse process.


fixed, thanks.


I guess you could place a zeroing allocator wrapper between the arena and its underlying allocator. It would write zeroes over anything that gets freed. Arena deinit frees everything that was allocated from the underlying allocator, so upon completion of each request, used memory would be zeroed before being returned to the main allocator.

And the handler signature would still be the same, which is the whole point of this article, so: yay.
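A rough sketch of such a wrapper, written against the 0.13-era `std.mem.Allocator` vtable (the exact signatures, alignment types, and the presence of `remap` differ between Zig versions, so treat this as the shape of the idea rather than drop-in code):

```zig
const std = @import("std");

/// Wraps a child allocator and scrubs memory as it is freed.
const ZeroingAllocator = struct {
    child: std.mem.Allocator,

    fn allocator(self: *ZeroingAllocator) std.mem.Allocator {
        return .{
            .ptr = self,
            .vtable = &.{ .alloc = alloc, .resize = resize, .free = free },
        };
    }

    fn alloc(ctx: *anyopaque, len: usize, ptr_align: u8, ret_addr: usize) ?[*]u8 {
        const self: *ZeroingAllocator = @ptrCast(@alignCast(ctx));
        return self.child.rawAlloc(len, ptr_align, ret_addr);
    }

    fn resize(ctx: *anyopaque, buf: []u8, buf_align: u8, new_len: usize, ret_addr: usize) bool {
        const self: *ZeroingAllocator = @ptrCast(@alignCast(ctx));
        // Scrub the tail when shrinking in place.
        if (new_len < buf.len) @memset(buf[new_len..], 0);
        return self.child.rawResize(buf, buf_align, new_len, ret_addr);
    }

    fn free(ctx: *anyopaque, buf: []u8, buf_align: u8, ret_addr: usize) void {
        const self: *ZeroingAllocator = @ptrCast(@alignCast(ctx));
        // Scrub before handing the block back to the child allocator.
        @memset(buf, 0);
        self.child.rawFree(buf, buf_align, ret_addr);
    }
};
```

An `ArenaAllocator` initialized with `zeroing.allocator()` would then zero every backing block on its own `deinit`, without the request handler knowing anything about it.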


Probably need to do a pass to find all \r chars; if the next char is \n, discard the \r, otherwise convert the \r to \n.

edit: Yeah, does exactly that: https://chromium-review.googlesource.com/c/chromium/src/+/55...
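That pass is simple to do in place; here is a sketch in Zig (the language used elsewhere in this thread), assuming the buffer is mutable:

```zig
const std = @import("std");

/// Normalizes \r\n and lone \r to \n, in place.
/// Returns the new (possibly shorter) length.
fn normalizeNewlines(buf: []u8) usize {
    var out: usize = 0;
    var i: usize = 0;
    while (i < buf.len) : (i += 1) {
        if (buf[i] == '\r') {
            buf[out] = '\n';
            out += 1;
            // CRLF: skip the following \n so the pair collapses to one \n.
            if (i + 1 < buf.len and buf[i + 1] == '\n') i += 1;
        } else {
            buf[out] = buf[i];
            out += 1;
        }
    }
    return out;
}

test "crlf and bare cr collapse to lf" {
    var buf = "a\r\nb\rc\n".*;
    const n = normalizeNewlines(&buf);
    try std.testing.expectEqualStrings("a\nb\nc\n", buf[0..n]);
}
```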


It is weird that everyone has been bashing these features. What is with all the negativity? I do not like pushing AI into everything myself, but I think this is a great use case for LLMs. And it was already opt-in.

Just tried "find all PDF files larger than 10MB" and it came up with `find . -name "*.pdf" -size +10M`. Maybe this was easy, but I don't know all the arguments of all CLI commands by heart, and it works beautifully.


Try it for macOS-specific things and it chokes. It'll hallucinate commands, send you stuff for Linux, or give you stuff that worked 20 years ago. `find` is an old, nearly universal command (good luck with any of the other GNU utils commands that are 15 years newer than the binaries shipped in macOS).

I have not found an LLM that knows any of that.

When I need a find command, I open `man find` and read and learn.


I have 627 items in my 1P vault. That won't work.


there are always edge cases


I do. It is critical software for me. Why would I use something inferior?

