
I'm having trouble parsing what you wrote. You say that you don't need to have unquestionable authority to use this style of language and then proceed to give an example of an unquestionable authority using this style of language.


I think I understood the point, and I'll give another example: your coworker (let's say you both report to the same boss) doesn't like the way you're talking to them and tells you "Hey indiv0, we don't say X here... we just don't say X here".

If you choose to question the statement, you could cynically assume that the other person will use anything you say against you as evidence that you're unprofessional, are harassing them, or are making them uncomfortable. Better to just shut up than say something damning.

I've been in a situation like this, and on the receiving end, "we don't do that here" comes across as "shut up, do what I say, and don't question me".


I updated it a bit. Maybe it was a bad example, but what I'm saying is having the option of arresting someone or giving them a fine isn't unquestionable authority -- if you're just mouthing off there's a limit to what the police can (legally) do to you.

Regardless, my point is in an average job there's a lot of people that don't have direct authority over you but have the power to make your life really unpleasant. A more senior coworker probably can't get you fired, but they may well be involved in important decisions around you.


It hardly needs to be someone who can "make your life really unpleasant". There's no implied threat.

If I were told "We don't do that here" at a new job, I would learn that I stepped across a line and likely a generally agreed-upon line, and that if I continue to do so, I will be causing problems for myself. And I mean "causing problems for myself" in the same way that any other behavioral quirk causes problems, from chewing with your mouth open to yelling at someone for not putting paper in the printer.

But then, I'm also not someone who feels the urge to debate what line was crossed, whether or not my intentions mattered when I crossed the line, the precise location of the line and any others that I might cross in the future, or whatever it is that the people raising this particular issue want out of the conversation.


Is this basically an Effect System?

Either way I'm excited to be able to generalize over asyncness.

*EDIT*: I now see that they explicitly address this question in the post:

> The short answer is: kind of, but not really. "Effect systems" or "algebraic effect systems" generally have a lot of surface area. A common example of what effects allow you to do is implement your own try/catch mechanism. What we're working on is intentionally limited to built-in keywords only, and wouldn't allow you to implement anything like that at all.

> What we do share with effect systems is that we're integrating modifier keywords more directly into the type system. Modifier keywords like async are often referred to as "effects", so being able to be conditional over them in composable ways effectively gives us an "effect algebra". But that's very different from "generalized effect systems" in other languages.


Not only that, but it's perpetually out of stock in most regions as well.


I'm not well versed in queuing theory, but I would assume that's rather the point. The existence of an unboundedly growing queue implies an inability to keep up with demand, so someone has to lose. The most shameless queue jumpers get served earlier because they have the highest priority. Anyone stuck in the queue didn't have a high enough priority in the first place.

From a utilitarian standpoint it makes sense :shrug:


With normal queuing, you get served eventually and can calculate the time it will take to get what you need. I fail to see how removing both of those parts makes it more efficient, and intuitively it seems that it would mostly lead everyone to avoid queuing unless absolutely necessary or to only try to buy things that surely nobody else wants. Mean wait time down, only the least desired products get bought, and even those only when starvation is the alternative.


Not to be reductive, but why not `docker run` the sha256 of the last successful stage of the build and run the next command manually?


That's what I generally do, but it's tedious as CI images are written to have as few layers as possible. So debugging often also involves changing Dockerfiles.


Because there’s no async fn in traits yet, almost all “async” traits are returning Pin<Box<dyn Future<Output=T>>> under the hood.

Same principle also applies if you’re trying to put a future in a struct (to create a named future). One of the fields will have more or less that type. This is useful for a few reasons (e.g. making it easy to refer to a specific future in type signatures or for writing futures out by hand).
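A rough sketch of the pattern (the trait and types here are made up purely for illustration):

    use std::future::Future;
    use std::pin::Pin;

    // Without async fn in traits, the "async" method returns a boxed, pinned future.
    trait Fetch {
        fn fetch(&self, key: String) -> Pin<Box<dyn Future<Output = Option<String>> + Send>>;
    }

    struct InMemory;

    impl Fetch for InMemory {
        fn fetch(&self, key: String) -> Pin<Box<dyn Future<Output = Option<String>> + Send>> {
            // async blocks have unnameable types, so boxing is what lets them
            // fit behind the trait's concrete return type.
            Box::pin(async move {
                if key == "hello" { Some("world".to_string()) } else { None }
            })
        }
    }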


As someone with extensive experience with Rust and a teensy bit of experience in Go I can tell you that I adore Rust for every use case I’ve tried it out for *except* for network services. It works ok for low level proxies and stuff like that; but Python/Flask-level easy it is not.

Meanwhile my experience with Go has been the reverse. I’ve found it acceptable for most use cases, but for network services it really stands out. Goroutines <3


Yes. Rust is for what you'd otherwise have to write in C++. It's overkill for web services. You have to obsess over who owns what. The compiler will catch memory safety errors, but you still have to resolve them. It's quite possible to paint yourself into a corner and have to go back and redesign something. On the other hand, if you really need to coordinate many CPUs in a complicated way, Rust has decent facilities for that.

Goroutines don't have the restrictions of async that you must never block or spend too much time computing. Goroutines are preemptable. The language has memory-safe concurrency (except for maps, which is weird) and has garbage collection. So you can have parallelism without worrying too much about race conditions or leaks.

Go comes with very solid libraries for most server-side web things, because they're libraries Google uses internally. Rust crates have the usual open source problem - they get to 95%-99% debugged, but the unusual cases may not work right. Go has actual paid QA people.

Go has limited aims. It was created so that Google could write their internal server side stuff in something safer than C++ and faster than Python. It has a limited feature set. It's a practical tool.

I write hard stuff (a multi-thread metaverse viewer) in Rust, and easy web stuff (a data logger which updates a database) in Go. Use the right tool for the job.


> You have to obsess over who owns what.

Most of the time you can also avoid this by just copying data using .clone(). This adds a tiny bit of overhead, which is why it isn't a default - but it'll still be comparatively very efficient.

Similarly, there are facilities for shared mutation (Cell/RefCell) and for multiple owners extending the lifetime of a single piece of data (Rc/Arc). It's not that hard to assess where those might be needed, even in larger programs.
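A minimal sketch of the "just clone it" approach (types made up for illustration):

    use std::sync::Arc;

    #[derive(Clone)]
    struct Config {
        name: String,
    }

    fn main() {
        let config = Config { name: "example".to_string() };

        // Cloning side-steps the ownership question entirely: each consumer
        // gets its own copy.
        let for_worker = config.clone();
        std::thread::spawn(move || println!("worker sees {}", for_worker.name))
            .join()
            .unwrap();

        // Arc instead extends the lifetime of one shared value across owners.
        let shared = Arc::new(config);
        let also_shared = Arc::clone(&shared);
        println!("{} == {}", shared.name, also_shared.name);
    }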


This, big time. Rust has comparable ergonomics to high-level languages once you stop trying to optimize everything and start throwing around clones liberally. And the nice thing is that if you do need to optimize something later, you can trust the compiler that if it compiles, then it’s correct (barring interior mutability/unsafe).


> The language has memory-safe concurrency (except for maps, which is weird)...

My understanding is you should operate the other way around. Things aren't safe for concurrent mutation unless it's explicitly documented as safe.

> So you can have parallelism without worrying too much about race conditions or leaks.

You might not worry, but I find these the two easiest classes of Go bug to find when entering new codebases ;).

Still, I agree Go is easier to get a web service up and running with.


Regarding concurrency and Go: coming from Rust, it's truly an awful language from a concurrency standpoint. It's a loaded footgun where you have to keep tons of implicit details in your mind to get it right.

https://eng.uber.com/data-race-patterns-in-go/


Minor nitpick, but goroutines are absolutely not preemptable; they’re cooperative. The Go compiler regularly sticks in yield points around IO and certain function calls, but you absolutely can starve the runtime by running a number of goroutines equal to the number of cores doing heavy computation, just like Node.js’s event loop.


This changed in Go 1.14 on most platforms: https://go.dev/doc/go1.14#runtime


TIL


Noob question: what does Go lack that makes it hard to write your multi-threaded metaverse viewer?


Any sort of safety: https://eng.uber.com/data-race-patterns-in-go/

Go's concurrency model is bog-standard shoot your foot off shared memory.

A channel is not magic, it's a multiple-producer multiple-consumer queue, you can unwittingly send a pointer over it (or a pointer-ish, like a hashmap) and boom you've got unchecked concurrent mutations, and in Go that even opens you up to data races (memory unsafety).

And because channels are slow, it's common to go down the stack, and hit further issues of mis-managing your mutexes or waitgroups, to say nothing of spawning a goroutine by closing over a mutable location.

You can do it right, in the same way you can do it right in C, C++, Java, Ruby, or Python. And arguably it's more difficult in Go because it drives you significantly more towards concurrency (and spawning tons of goroutines) without really giving you better tools to manage it (the only one I can think of is select).


Long-time Rust/Go dev here: Go is so much faster to write web services in. For the rest, Rust is pretty nice.


> I adore Rust for every use case I’ve tried it out for except for network service

That's hilarious. My daily-driver language is Elixir, which is the GOAT for network services. I saw Rust as the perfect complement to it for everything else.


Could you please expand on this a bit? What are some example services that would be better written in go vs rust?


I was expecting some clickbait/spam (the layout of the website has that feel) but this was surprisingly super in-depth and 100% matches up with my experience doing prompt engineering.

There's a fine line between so descriptive that the AI hits an edge case and can't get out of it (so every attempt looks the same) and not being descriptive enough (so you can't capture the output you're looking for). DALL-E is already incredibly fast compared to public models and I can't wait for the next order-of-magnitude improvement in generation speed.

Real-time traversal of the generation space is absolutely key for getting the output you want. The feedback loop needs to be as quick as possible, just like with programming.


I'm surprised at the artistic skill of the person who wrote the book, in contrast with the terrible web UI skill of the person who designed the site.


Wouldn't surprise me too much, if they were the same person, but had vastly different amounts of experience with the different media?


As someone who makes very weird and experimental stuff, DALL-E is like a Segway and CLIP is like a horse (especially with those edge cases that tend to self-engorge/get worse if you aren't clever). It's a shame compute costs aren't much different between the two (correct me if I'm wrong) - I don't think there is much of a purely artistic process with DALL-E, although I do like to use DALL-E Mini thumbnails as start images or upscale testers.

>Real-time traversal of the generation space is absolutely key for getting the output you want.

I've been sketching around a two-person browser game where a pair of prompters can plug things in together in real-time :D


Another interesting thing with prompt engineering is that attempt #1 with prompt x might yield something you don't want, but attempt n might yield something you do :)


1. Don't use the `async` ecosystem.

2. Prefer dynamic dispatch to monomorphization (i.e., use fewer generics).

3. Don't use proc macros (i.e., don't depend on the `syn` crate, even transitively).

Easy to say; hard to put into practice. But that's all there is to it.


> Don't use the `async` ecosystem.

I want to make it /very/ clear that async isn't to blame at all for the pathological build times described here. It's a bug about traits and lifetimes, both very core concepts of Rust that you deal with even if you stay away from async code.

async rust will certainly be more ergonomic once some more improvements land (hopefully later this year), but I don't feel like it deserves all the sighs it's been publicly getting these past few months. (And I /love/ to complain. I've written pieces named "Surviving Rust async interface", "Getting in and out of trouble with Rust futures", "Pin and suffering", etc.)

> Prefer dynamic dispatch to monomorphization (i.e., use fewer generics).

Unless you hit a pathological case as shown in the article, it tends to not be _that_ bad, especially if you enable `-Z share-generics=y` (unstable still, yet enabled by default for debug builds if I remember correctly).

Overall still solid advice - although "use fewer generics" sometimes turns out to be "just turn a big generic type into `Box<dyn Trait>`" (it's not _just_ boxing, that would be `Box<T>`). That's what axum[1] does with all services, and it's never had the compile times issues warp[2] had, for example.
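A minimal sketch of that swap (illustrative trait, not axum's actual API):

    trait Handler {
        fn handle(&self, req: &str) -> String;
    }

    // Monomorphized: a separate copy of this function is compiled for every H.
    fn serve_generic<H: Handler>(handler: H, req: &str) -> String {
        handler.handle(req)
    }

    // Type-erased: one copy is compiled; calls go through a vtable instead.
    fn serve_dyn(handler: Box<dyn Handler>, req: &str) -> String {
        handler.handle(req)
    }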

> Don't use proc macros (i.e., don't depend on the `syn` crate, even transitively).

Good news there, I hear there's some progress on the proc-macro bridge (which improves macro expansion performance) AND "wasm proc-macros". I hope this piece of advice will be completely irrelevant in a year (but for now, it's spot-on. using pin-project-lite instead of pin-project is worth it, for example).

[1] https://lib.rs/crates/axum

[2] https://lib.rs/crates/warp


I know about the bridge improvement (the amazing @nnethercote's work), but can you point to sources on the wasm proc macros? The last thing I know about them is @dtolnay's watt.

In my experience, however, macros are usually not that problematic and `syn` is a one-time cost.


My understanding of point #2 is that LLVM may still try to devirtualize the call, which would reduce the performance impact -- is that true for Rust? I know it happens sometimes in C++.

Also for proc macros, rust-analyzer seems to struggle with them sometimes as well, so I try to avoid them (outside Serde, which is worth any price) for that reason.


rust-analyzer has been able to expand all kinds of proc macros for a long time now. The only project where it doesn't work (although to be honest I haven't really tried) is rustc.


What forthcoming improvements to async are you referring to?


Or don't, because all of those things are great. Compile times aren't that bad unless you're doing a clean uncached build, which is very unlikely.


Async seems to be the first big "footgun" of Rust. It's widespread enough that you can't really avoid interacting with it, yet it's bad enough that it makes people resent the language.


It's really not as bad as it's made out to be. You can paint yourself into a corner with it, but a lot of that is that async is fundamentally more complicated than sync / threaded code, and there's only so much any language can do to paper that over. Rust exposes a lot of details, so it can be complicated to get to grips with how they combine with async in certain corner cases, but the happy path is quite happy even now.

A lot of the async Rust code I work with already looks like `async fn foo() -> ... { do_request().await?.blah().await }`, plus the occasional gathering of futures into a `Vec` to join on. That sort of thing, not much different from Javascript, but with a lot more control of the low-level details.
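For the "gathering futures into a Vec" part, a minimal sketch using `futures::future::join_all` (the `fetch` function here is just a stand-in):

    use futures::future::join_all;

    async fn fetch(id: u32) -> String {
        // stand-in for a real request
        format!("result for {id}")
    }

    async fn fetch_all(ids: Vec<u32>) -> Vec<String> {
        // Build the futures first, then await them all concurrently.
        let futures: Vec<_> = ids.into_iter().map(fetch).collect();
        join_all(futures).await
    }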

A good deal of corner cases should get better once async traits are stabilized, which will mean much less need for manually writing out Future types. But honestly, even now it's not that bad. I have a codebase that uses async to read hundreds of thousands of files[1], streaming gunzip them, pass them to another future which streaming parses records from them, and then pushes those parsed records into a `FnMut` closure for further non-async processing. It took a bit of thinking and design to get everything moving together nicely, but that corner of the codebase now is only ~200 lines of pretty straightforward code -- there's like 1 instance of `Unpin`. It's not that bad.

[1]: I know async isn't necessarily faster for reading files, but it started life doing network requests and it can still saturate a 200-core machine so I haven't felt the need to port it over to threads.


Quick aside: if you're willing to live the nightly life (unstable rustc), the `type_alias_impl_trait` feature gets you most of the way to "async trait methods". You still have to have a `Future` associated type, but in impl blocks, it just becomes `type Future = impl Future<Output = Blah>`, and then the compiler infers what the concrete (and probably unnameable, if you use async blocks) type is - no need to mess with `Pin<Box<T>>`.
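A rough sketch of what that can look like on nightly (the trait and types are made up, and the exact feature gates involved have shifted over time):

    #![feature(type_alias_impl_trait)]

    use std::future::Future;

    trait Fetch {
        type Fut: Future<Output = u32>;
        fn fetch(&self) -> Self::Fut;
    }

    struct Answer;

    impl Fetch for Answer {
        // The compiler infers the concrete, unnameable async-block type here,
        // so no Pin<Box<T>> is needed.
        type Fut = impl Future<Output = u32>;

        fn fetch(&self) -> Self::Fut {
            async { 42 }
        }
    }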

The most egregious code comes when implementing one of the `AsyncRead`/`AsyncWrite` traits or similar, and that can come up a bunch in backend services, for example if you want to record metrics on how/when/where data flows, apply some limits etc. I'm curious how the ecosystem will adapt once async trait methods land for real.


FWIW I really don't like async in rust. It's improved significantly over the past couple years and it's nowhere near as bad as callback hell in Javascript but things still feel opaque. I've been toying around with a little monitoring agent (think Nagios or Sensu) to keep an eye on my defective LG fridge. So far I've managed to crash rustc twice. Trying to wrap my head around one library (that I was using incorrectly) I managed to "fork bomb" the damn thing and realize that I've little to no insight into the runtime. Try to find the current number of running tasks being managed by tokio…

The beauty of the rust async stuff is that you can move to a multi-threaded runtime as you desire with minimal effort.


> Try to find the current number of running tasks being managed by tokio…

As a heavy user of async Rust in production (at a couple places), resource leaks / lack of visibility into that has been a top issue.

In this area, tokio-console[1] is an exciting development. I have high hopes for it and adjacent tools in the future. (Instrumenting your app with tracing+opentelemetry stuff can help a lot, too).

Until those become featureful/mainstream enough, Go has the upper hand in terms of "figuring out what's going on in an async program at any given time".

[1]: https://lib.rs/crates/tokio-console


Okay that's really cool, I'm going to have to play around with it.


> The beauty of the rust async stuff is that you can move to a multi-threaded runtime as you desire with minimal effort.

This is also a downside, having multi-thread be the default and not single-thread. It introduces some awkward / accidental trait bounds that are annoying to deal with if you want to do thread-per-core type of stuff IIRC.


I respectfully disagree; I don't think concurrency has to be that much more fundamentally complicated. It's likely that Rust's other design decisions are what made concurrency so difficult in Rust.

Pony does fearless concurrency better IMO, and Forty2 shows how we can expand on Pony to be faster and more flexible.

There are other approaches that have emerged recently too. For example, one can apply Loom's memory techniques to most memory management approaches to eliminate the coloring problem, to decouple functions from concurrency concerns.

There are also languages which separate threads' memory from each other which allows them to do non-atomic refcounting, relying on copying for any messages crossing thread boundaries (though that's often optimized away, and could be even less than Rust's clone()ing elsewhere).

One could also apply that technique to a language using generational references, if they want something without RC or tracing GC.

Sometimes I wish Rust waited just a few more years before going all-in on async/await. Alas!


> Pony does fearless concurrency better IMO

Pony is garbage collected. Most of the reasons why Rust async/await is the way it is boil down to the fact that Rust is memory safe without using GC.

> Forty2 shows how we can expand on Pony to be faster and more flexible

I can't tell from a glance, but that also looks garbage collected.

> For example, one can apply Loom's memory techniques to most memory management approaches to eliminate the coloring problem

Assuming you're referring to the JVM Project Loom, that's just M:N threading. This was tried in Rust almost a decade ago. Nobody used it because the performance was not appreciably better than 1:1 threading.

> There are also languages which separate threads' memory from each other which allows them to do non-atomic refcounting

You mean like Rust? Like, that's exactly why Rust can have both Rc and Arc and still be safe.

> relying on copying for any messages crossing thread boundaries (though that's often optimized away, and could be even less than Rust's clone()ing elsewhere).

Ancient Rust did this, but it was removed because with the current immutability and borrow checking rules there is no need for copying anymore. Why would you want copying if you don't need it?

I'm also not going to just accept that clone() could be faster. I mean, I'm sure the clone codegen could be improved by better register allocation or whatever, but I don't think that's what you mean.

> One could also apply that technique to a language using generational references, if they want something without RC or tracing GC.

Why would you want to copy if you don't have to?

> Sometimes I wish Rust waited just a few more years before going all-in on async/await. Alas!

I haven't seen anything here that is better than Rust's async/await, and a lot that's either worse or doesn't fit with the rest of Rust's design.


I'd push back on "concurrency so difficult in Rust" -- because async isn't the only, or even best, way to do concurrency in Rust. I prefer using threads when I can, and Rust makes working with threads quite joyful[1]. I'd cautiously agree that it's possible async wasn't the best model to go "all in" on, though Rust is quite happily multi-paradigm so if something better comes along and has a notably different set of optimal use-cases than threads or async, I wouldn't be surprised to see Rust adopt it as well.

I'm personally sort of skeptical about "color free async" because the models for sync/blocking IO and async IO are so different -- you can paper over the syntax differences, but you're going to be in a world of hurt when the semantic differences arise[2]. I'll admit I haven't tried a color-free async implementation myself though, so it's just speculation / sour grapes :-)

> There are also languages which separate threads' memory from each other which allows them to do non-atomic refcounting, relying on copying for any messages crossing thread boundaries (though that's often optimized away, and could be even less than Rust's clone()ing elsewhere).

Curious what you mean by this -- my understanding is that Rust also does this (i.e., you can `move` a value into a thread's closure so that thread owns it now, and then the thread can hand it back in a `JoinHandle`). That sort of sharing doesn't require an Arc or Mutex, since there's only one owner at a time. Is this something different?
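A minimal sketch of that ownership hand-off (no Arc or Mutex involved):

    use std::thread;

    fn main() {
        let data = vec![1, 2, 3];

        // Ownership of `data` moves into the spawned thread's closure.
        let handle = thread::spawn(move || data.into_iter().sum::<i32>());

        // The result is handed back through the JoinHandle.
        let sum = handle.join().unwrap();
        println!("{sum}");
    }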

[1]: The other day I turned something reading in files from the filesystem sequentially into a custom threadpool passing blocks of parsed JSON over a MPSC channel that exposed the whole thing as a sequential iterator and it worked first try. I almost didn't believe it until I wrote the tests.

[2]: E.g., "I wrote this and tested it with blocking IO but this syscall isn't supported by io_uring so in async mode it goes to a threadpool and passes some huge object in a message which kills perf with a huge memcpy", or some similar jank. Just spitballing on the type of thing I would fear happening, not a specific example.


The biggest problems with "colorless async" arise with FFI. You really can't abstract over the differences between a real OS mutex and a language mutex when you're interfacing with system libraries that expect locks to actually behave like locks. Otherwise it's a recipe for deadlocks.


I really dislike threads and now I've grokked async (which, granted, took effort), I much prefer that world. I just find the design of my system is much cleaner and more robust than anything I've written in threads.


Why is it a footgun? I have been using it and I didn't notice anything bad with it yet.


I think there's a terminology problem. To me a "footgun" is a feature which provides you with a very easy way to shoot yourself in the foot, hence the name. For example there's no way that the a[b] array index operation should default to not having bounds checks as it does in C and C++. That's a footgun. Rust does pretty well on this front, and I don't think async is especially worse.

But I can see Rust async being more of a gumption trap than many features. A gumption trap is a problem which uses up your motivation before you can work on the thing you actually wanted to do and so there's none left for the actual project.


Isn’t that sort of the definition of a foot gun? That the downside/danger isn’t obvious.


Well only if the danger exists, obviously.


How are people writing event loop based webservices with rust then if they're not using async?


(I don't think async is bad.)

Some folks just use threads, no event loop.

Event loops are also there without async, you could just write against mio or whatever else you choose directly.


They do containerize but it’s not enough. You still need the right drivers and CUDA stuff installed on the docker host, which can be finicky to setup. Not to mention figuring out how to get docker to actually pass the GPU to the container.


I see this has not gotten any better in the last 5 years. For future reference, if you as a dev are interacting with a product that is this hard to use and there is no other option but to use it (CUDA), you should buy their stock.


I'd much rather look out for competitors.

Resting on laurels won't last. If a company stops improving its offer, a competitor might catch up.


The competition has even more broken software (AMD). At least Nvidia's works when you need it to.


I think OpenCL support has been getting better, and it may be possible to run a lot of models with it, but that just doubles your already frustrating amount of hours trying to set the damn thing up.


> Not to mention figuring out how to get docker to actually pass the GPU to the container.

Should just be passing `--gpus all` these days, shouldn't it?

