
Agreed. It's not a "foreign" function if everything is in WASM. All the VM sees is WASM code; it doesn't make a difference if it was originally written in C or Rust.


"Foreignness" is an incidental property, the actual goal is interoperation. Being able to interoperate with C code from within the sandbox is both useful and something that BEAM doesn't do. In the meantime, there's nothing preventing anyone from doing regular FFI from within the part of the Rust program that lives outside of Lunatic, if for whatever reason the sandbox is insufficient.


Ok.


hCaptcha works on Tor, sometimes.


You mean like a REPL? Most languages are unusable for interactive programming. And it's not a downside if it only rules out a use case that was nobody's goal in the first place.


Yes, just pointing out the trade-offs here. By the way, many functional languages have quite good interactive experiences (OCaml, F#, Haskell, and Clojure all have decent REPLs). Usually they work instantly for small projects.


Have you ever seen a procedural language with a good REPL? Python maybe, but definitely no compiled, garbage-collector-less procedural language. The "needing to write type signatures" point is completely unrelated; I have no idea how that would stop Rust from being good in a REPL.


Sure: https://cdn.rawgit.com/root-project/cling/master/www/index.h...

Can’t speak to its quality, but there’s nothing stopping someone from writing a REPL on top of a sufficient compiler API... and “good” is only limited by inference quality and runtime performance. IDEs are pretty snappy at showing you autocomplete as you type regardless of whether the language has a garbage collector. As for performance, that’s what caching would be for, along with a compiler design optimized to recompile only the code that changed...

I would also point out that the “auto” keyword in C++ likely saves folks a lot of typing ;-) I know it and similar inference features changed my mind on the whole static vs. dynamic debate...


Threes made the fatal mistake of charging money on the App Store.


The bigger mistake was being single-platform.


I think the fact that 2048 was first released on GitHub Pages was one of the reasons for its success. A virtually limitless resource, for free.


One ugly trick for unreachable branches is this:

    assert(!"You gave an invalid letter");
The string literal decays to a non-null pointer, so !pointer evaluates to false, the assertion always fails, and your explanation shows up in the failure output. I wish every assert took an optional explanation message.
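For illustration, here's how the trick might look in a switch over user input; the function and grade values are made up for this sketch:

    #include <assert.h>

    /* Hypothetical example: map a letter grade to a score. */
    static int grade_to_score(char grade) {
        switch (grade) {
            case 'A': return 4;
            case 'B': return 3;
            case 'C': return 2;
            default:
                /* The string literal is a non-null pointer, so the condition
                   is always false and the message appears in the failure output. */
                assert(!"You gave an invalid letter");
                return -1;
        }
    }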


I often do assert(condition && "String explanation") myself.
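A minimal sketch of how that variant reads in practice (idx and len are just placeholder names for this illustration):

    /* On failure, the whole expression is printed, string included. */
    assert(idx < len && "index must stay within the buffer");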


I would recommend not doing that. Just use the initializer syntax, or write an inline function.


"Blockchain-based vinyl bootstrapper, written in Rust"


Quantum Blockchain Vinyl Coinwallet with RFC1149 Support, written in Literate Haskell


Running serverless


The new spinning Rust.


Allocation logic is much simpler than you think. A solution to a specific problem will always be simpler and faster than a general solution like jemalloc.


Yes? Allocation in a multithreaded program is simple? Balancing allocator space overhead, fragmentation, and allocator performance? What about getting statistics and insight into allocations? Detecting buffer overruns?

You couldn't be more wrong. Behold the source code to jemalloc and despair.

Allocation can be simple in very specific cases where you can use a single threaded arena.


Allocating and deallocating in a performant manner, without introducing fragmentation, while staying thread-safe isn't easy by any means.

Add in the relevant taxes, such as detecting use-after-free, etc., and it takes a LOT of work


The solution in TFA doesn’t do anything about use after free, does it?


That is a logical fallacy. A solution for a specific problem _can_ be simpler and faster than a general solution, given enough time. However, jemalloc has had an absolutely huge amount of people-hours invested into optimizing it, so it is not unlikely that it'll still be faster for specific problems unless the specific solution also has significant time invested into it.


Their Allocator library is really an arena, a special-purpose allocator that was discussed on HN recently. [1] I think it's fair to say that when not using GC, it's worth looking for a suitable scope for arenas: short-lived, bounded but significant number of allocations. In many servers, an arena per request is appropriate. You can totally beat directly using the global allocator by layering arenas on top, whether the global allocator is jemalloc or anything else. Batching the frees makes a huge difference, both because there are fewer calls to free and because you have to do less pointer-chasing (with potential cache misses) to decide what calls to make to free.
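The batched-free idea is language-agnostic, so here's a minimal single-threaded bump-arena sketch in C just to make the concept concrete (TFA's code is Go, and the names here are made up):

    #include <stdlib.h>

    /* Toy bump arena: allocations are pointer bumps, and everything is
       freed in one call when the arena (e.g. the request) is done. */
    typedef struct {
        char  *buf;
        size_t cap;
        size_t used;
    } Arena;

    static Arena arena_create(size_t cap) {
        Arena a = { malloc(cap), cap, 0 };
        return a;
    }

    static void *arena_alloc(Arena *a, size_t n) {
        n = (n + 7) & ~(size_t)7;              /* keep 8-byte alignment */
        if (a->buf == NULL || n > a->cap - a->used)
            return NULL;                       /* out of space: caller decides */
        void *p = a->buf + a->used;
        a->used += n;
        return p;
    }

    static void arena_destroy(Arena *a) {      /* one free() instead of many */
        free(a->buf);
        a->buf = NULL;
        a->cap = a->used = 0;
    }

Create one per request, hand out allocations from it, and tear the whole thing down at the end: that's where the fewer-frees and less-pointer-chasing wins come from.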

Maybe the arenas reduce the allocations enough (and make them of reasonably uniform size) such that a simple buddy or slab allocator underneath would beat jemalloc. These simple allocators would have an "unfair" advantage of not having to go through Cgo.
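And for the uniform-size case, the "simple slab" idea can be as small as a free list of fixed-size blocks; again a made-up C sketch, single-threaded and without any of jemalloc's bookkeeping:

    #include <stddef.h>

    /* Toy slab: equal-sized blocks carved from one static pool and
       recycled through a singly linked free list. Sizes are arbitrary. */
    #define BLOCK_SIZE  64
    #define BLOCK_COUNT 1024

    typedef union Block { union Block *next; char bytes[BLOCK_SIZE]; } Block;

    static Block  pool[BLOCK_COUNT];
    static Block *free_list;

    static void slab_init(void) {
        for (size_t i = 0; i + 1 < BLOCK_COUNT; i++)
            pool[i].next = &pool[i + 1];
        pool[BLOCK_COUNT - 1].next = NULL;
        free_list = &pool[0];
    }

    static void *slab_alloc(void) {            /* O(1), no locks, no headers */
        Block *b = free_list;
        if (b != NULL)
            free_list = b->next;
        return b;
    }

    static void slab_free(void *p) {
        Block *b = p;
        b->next = free_list;
        free_list = b;
    }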

Or maybe having each Allocator (arena) use the Go allocator for its backing allocations would be okay. It'd still be garbage-collected, which they've been trying to avoid, but the collector would no longer be looking through each individual allocation, so maybe it'd be zippier about freeing stuff.

Note that (as in my other comment) I still think Rust is a better language for this program. In fairness to them, there are plenty of practical reasons they might have ended up here:

* Rust may not have existed or been a reasonable choice when they started coding. In my experience, porting to Rust really is fairly labor-intensive.

* There may be significant other parts of the program where Go/GC is more appropriate.

* Maybe the whole rest of their company's codebase is in Go and they want to use that expertise.

[1] https://news.ycombinator.com/item?id=24762840


I have never seen a well-made but general solution beat a well-made and specific solution for one problem, in complexity or run time, ever. This is very true with allocators. A lot of the time people will just use 'malloc' without any thought about what they're actually allocating. For example, if you only allocate/deallocate from one thread, jemalloc is already way overblown in complexity.


That's not what I meant. If you can muster the time and budget for a well-made specific solution, great. What I was getting at is that due to time and/or budget constraints, most custom solutions will not actually be well-made, and the implementer would have been better off just picking the battle-tested off-the-shelf solution.


But TFA isn’t about just adapting some off-the-shelf quickie solution. It explains all the hoops necessary to cross the CGo barrier and use jemalloc instead of the normal Go garbage collector. ISTM once you put in that level of effort, you’re in the space where a specific solution can beat a general one.


At this point, why not just use C?


UB, memory leaks, memory corruption, implicit conversions,...

The benefit of using Go is keeping the memory safety for like 90% of the application, with just a tiny unsafe code portion.

In C, 100% of the source code is unsafe.


Assuming that Go can easily call into C code, you can still implement the performance-critical parts in carefully written and tested C and the other 90% in high-level Go.

On the other hand, if plugging in a faster allocator fixes performance problems, then there's something bigger amiss in the overall application design. Creating and destroying small objects at such a high frequency that memory management overhead becomes noticeable isn't a good idea in any case, GC or not.


> performance-critical parts in carefully written and tested C

Is something that not even the Linux kernel, with its careful and thorough patch review process, is able to achieve.


I believe calling C from Go is a massive pain, and is slow, because of goroutines. Go makes the OS syscalls directly to avoid going through libc.


> UB, memory leaks, memory corruption, implicit conversions,[...]

> In C, 100% of the source code is unsafe

Is it perhaps better to focus on context? That is, cost vs. benefit with respect to context:

- How much safety and what kind and level of safety assurances does the specific application need?

- How much does it cost in development time/friction, application performance, engineering complexity, [insert other relevant cost axes] to achieve the desired level of safety and safety assurances?


As the high-integrity security standards show, if you want to write provably safe code in C, there is no way around something like MISRA-C or Frama-C, alongside certification tooling like the kind sold by LDRA.

https://www.ldra.com/

Naturally this is the kind of expense that 99% of companies aren't going to take on until it finally becomes a legal liability to have security exploits in their software.


-Wimplicit, UBSan, ASan,...

The idea that "you can't write safe C" is a big joke. C is as safe as you make it.
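For concreteness, this is the sort of one-byte overrun that tooling like ASan turns into an immediate, loud failure (assuming a compiler with -fsanitize=address, e.g. a recent GCC or Clang):

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf = malloc(8);
        /* Copies 9 bytes (8 chars + NUL) into an 8-byte buffer. Built with
           cc -g -fsanitize=address this aborts with a heap-buffer-overflow
           report; without instrumentation it may appear to "work". */
        memcpy(buf, "12345678", 9);
        free(buf);
        return 0;
    }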



That'd be a rewrite.


They've done the hard work already. It'd be trivial to port.

