
Even more tangentially: This is one of the reasons I'm having a great time developing a game in Rust.

You never want lock contention in a game if at all avoidable, and in a lot of cases taking a lock is provably unnecessary. For example: Each frame is divided into phases, and mutable access to some shared resource only needs to happen in one specific phase (like the `update()` function before `render()`, or when hot-reloading assets). With scoped threads and Rust's borrowing rules, you can structure things to not even need a mutex, and be completely certain that you will receive a stern compiler error if the code changes in a way that means you do.
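Something like this toy sketch (types and function names are made up for illustration, not taken from my actual codebase): exclusive mutation happens through `&mut` in the update phase, and the render phase only hands out shared borrows to scoped threads, so no mutex is involved at all.

```rust
use std::thread;

#[derive(Clone)]
struct Entity {
    position: [f32; 3],
}

struct World {
    entities: Vec<Entity>,
}

// Update phase: `&mut World` proves no other thread can observe the data,
// so no lock is needed.
fn update(world: &mut World, dt: f32) {
    for e in &mut world.entities {
        e.position[1] += dt;
    }
}

// Render phase: scoped threads borrow `world` immutably. If a later change
// tries to mutate it while these threads are alive, the borrow checker
// rejects the program instead of it showing up as contention in a profiler.
fn render(world: &World) {
    thread::scope(|s| {
        for chunk in world.entities.chunks(64) {
            s.spawn(move || {
                for e in chunk {
                    let _draw = e.position; // submit work for this entity
                }
            });
        }
    });
}

fn main() {
    let mut world = World {
        entities: vec![Entity { position: [0.0; 3] }; 1024],
    };
    // One frame of the loop: exclusive mutation, then parallel read-only render.
    update(&mut world, 1.0 / 60.0);
    render(&world);
}
```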

When possible, I'd always love to take a compiler error over a spike in the profiler.


As a musician, glad to see audio production mentioned as an important use case.

Even a 10 ms delay is noticeable when playing an instrument. A lot of processing is required to produce a sound: Receive the input over a hardware interface, send it into userspace, determine what sound to play - usually a very complicated calculation with advanced plugins - potentially accessing many megabytes of raw sound data, apply chains of effects, mix it all together, send megabytes of uncompressed sound data back to the kernel, and push it out through an audio interface.
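For a sense of what the userspace part of that chain looks like, here's a deliberately tiny sketch (a made-up function, with a sine oscillator standing in for a real instrument and a single clamp standing in for a whole effect chain):

```rust
// All of this per-buffer work has to finish before the hardware needs the
// next buffer; real plugins do vastly more per sample, but the shape is the same.
fn process_block(note: f32, effects: &[fn(f32) -> f32], output: &mut [f32], sample_rate: f32) {
    let freq = 440.0 * 2f32.powf((note - 69.0) / 12.0); // MIDI note number to Hz
    for (i, out) in output.iter_mut().enumerate() {
        // "Instrument": a bare sine oscillator stands in for a sampler/synth.
        let mut sample = (std::f32::consts::TAU * freq * i as f32 / sample_rate).sin();
        // "Effect chain": apply each effect in order (EQ, compression, reverb, ...).
        for effect in effects {
            sample = effect(sample);
        }
        *out += sample * 0.5; // mix into whatever is already in the buffer
    }
}

fn main() {
    let mut buffer = vec![0.0f32; 256]; // 256 samples ≈ 5.3 ms at 48 kHz
    let effects: [fn(f32) -> f32; 1] = [|s: f32| s.clamp(-0.8, 0.8)]; // crude "limiter"
    process_block(69.0, &effects, &mut buffer, 48_000.0);
    println!("peak of this block: {}", buffer.iter().fold(0.0f32, |m, s| m.max(s.abs())));
}
```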

The more predictable the kernel can be, the more advanced the audio processing can be, and the better the music that comes out. Every single microsecond counts.

Modern software instruments can emulate acoustic instruments with a high degree of precision and realism, and a huge range of expressive freedom, but that takes a lot of processing power in real time.


I dunno. "The main aim of the PREEMPT_RT patch is to minimize the amount of kernel code that is non-preemptible", that's nice and all, but there are still no guarantees, which (in some interpretation of "real time") an RTOS is all about. For some applications the effort might suffice, but others will insist on those guarantees. For music recordings, I (perhaps naively) would expect a decent audio card with its own processor and (RT) firmware would yield better results.

> For music recordings, I (perhaps naively) would expect a decent audio card with its own processor and (RT) firmware would yield better results.

And that is commonly used in high-end professional audio settings...

I'm not speaking against the above comment. Audio is hard, but purely because of the timelines required with extremely low buffer sizes. It's pretty common to be running at buffer sizes of 256 samples or lower... However, on a modern processor that's very low latency... At 48,000 samples per second, a buffer of 256 samples would generally give you a latency of 5 ms, give or take... But that also gives you only 5 ms to do all processing of that buffer, which can be significant, as was mentioned above.
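To make the arithmetic concrete (a trivial sketch, nothing more):

```rust
// Latency of one audio buffer is its length divided by the sample rate;
// it's also the entire time budget for processing that buffer.
fn buffer_latency_ms(buffer_samples: u32, sample_rate_hz: u32) -> f64 {
    buffer_samples as f64 / sample_rate_hz as f64 * 1000.0
}

fn main() {
    // 256 samples at 48 kHz ≈ 5.33 ms -- the "5 ms give or take" above.
    println!("{:.2} ms", buffer_latency_ms(256, 48_000));
}
```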

Honestly though, in my experience, FPGAs are the way to go for super low latency audio; most modern implementations are measured in microseconds for processing stages.

The problem is you have to do all the processing on the FPGA, since any round trip to the CPU would take eons by comparison.

Avid HDX is the pricey but industry-standard option for studio and post-production work. There are other options, but because most people don't need them, they aren't popular. Most people are fine with restricting processing during recording, which is when latency is important; when producing and mixing, latency could be several hundred ms and you don't care...

Source: me, professional audio "engineer" with over a decade in the industry who has recorded multiple commercially successful albums, ad spots, generic corporate videos, radio spots and entire radio shows etc...


I can easily manage several instruments at 1.3 ms latency while still using my machine for gaming and such on this patchset, without more than the occasional xrun, whereas the same machine can barely manage 10 ms latency if I don't have _anything_ running but the audio applications.

Something doesn't have to be perfect to enable a usage.


Who is "they"? Seriously, who?

Well, if you work on security-critical software that's currently written in a memory-unsafe language, I would say that's a good candidate for a Rust rewrite. Likewise if you work on a widely used library that's awkwardly slow because it's written in Python.

Which is not exactly the same as wanting everybody to rewrite everything in Rust, but I suppose it's the sort of thing that annoys nineteen999.

There are also a lot of devs rewriting things in Rust for their own entertainment or whatever, which I think is the main source of the "rewrite everything in Rust" meme.


It's a good candidate for a rewrite in a memory-safe language, of which Rust is but one. Some people don't like the Rust syntax, the cargo ecosystem's resemblance to npm, the gulf between what's in stable vs. nightly Rust (i.e. ISO standardization might be nice to consider at this point), or the personal attacks on experienced C/C++ programmers by well-meaning but junior devs in the Rust community who would be better served (re)writing stuff to prove their point rather than evangelizing so much. Some of us also find the sensitivity to any kind of criticism of these aspects a little amusing, so we occasionally poke fun in return.

There are still plenty of constrained environments, architectures not yet supported, and a lack of mature libraries for 2D/3D graphics, amongst other things, that make Rust not a good fit yet for many projects where C/C++ already works. When Rust gets there and it and its community mature a bit, we will all cheer. Until then ... we'll just get back to work.


The Rust Evangelism Strike Force

Are They in the room with us?

The parent comment referred to "Rustaceans". Check the first two words.

I don't know. In spite of Boats' great points, I think the programmer intuition definitely aligns more with it being a type property, in the sense that it enables the most interesting use case: self-referential values. All of that interacts badly with move semantics, and especially the lack of "guaranteed copy elision", but nevertheless...
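To illustrate what I mean (a contrived sketch, the field names are made up): a self-referential value points into itself, and a plain move copies the pointer verbatim while relocating the thing it points at.

```rust
struct SelfRef {
    data: [u8; 4],
    first: *const u8, // intended to point at `data[0]` inside this very struct
}

fn main() {
    let mut s = SelfRef { data: *b"abcd", first: std::ptr::null() };
    s.first = s.data.as_ptr();
    println!("before move: {:p} vs {:p}", s.first, s.data.as_ptr()); // same address

    // A plain move relocates `data`, but `first` still holds the old address.
    // (We only print the stale pointer, we never dereference it.)
    let moved = s;
    println!("after move:  {:p} vs {:p}", moved.first, moved.data.as_ptr()); // usually different

    // `Pin` exists to promise that a value won't be moved again, which is
    // what makes this kind of layout sound.
}
```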

I'm curious, what drama in the Rust community are you referring to?

I see some drama associated with Rust, but it's usually around people resisting its usage or adoption (the recent kerfuffle about Rust for Linux, for example), and not really that common within the community. But I could be missing something?

Zig is great, but it just isn't production ready.


>I'm curious, what drama in the Rust community are you referring to?

https://news.ycombinator.com/item?id=36122270 https://news.ycombinator.com/item?id=29343573 https://news.ycombinator.com/item?id=29351837

The Ashley "Kill All Men" Williams drama was pretty bad. She had a relationship with a core Rust board member at the time, so they added her to the board just because. Any discussion about her addition was censored immediately; Reddit mods removed and banned any topics and users mentioning her, etc.


Yes, she was dating him at the time. It did not go well for Rust, https://news.ycombinator.com/item?id=28633113 https://news.ycombinator.com/item?id=28513656

Ashley is just one out of many, unfortunately. Other former and current top contributors share similar qualities. Those qualities tend to trigger unnecessary explosions like last year's https://www.reddit.com/r/rust/comments/13vbd9v/on_the_rustco....




I have a few pieces of info, but they're not linkable. Not yet.

IMHO, I don't think she was trash. I think she was the face of Rust's lack of integrity, in many respects.


On drama: https://users.rust-lang.org/t/why-is-there-so-much-mismanage...

Also, Zig is set to release 1.0 beta in November.


I think the replies in that thread actually do a good job of describing how it is a bit overblown.

As for Zig, I hope they make it. I think I kind of see why people are excited about it, but fundamentally the reason I'm not super hyped is that it doesn't seem to really enable anything new. It's far more expressive than C, but it doesn't make it easier to manage inherent complexity (to my understanding - haven't played with it a lot).


Zig is not even close to a 1.0 beta this year. Or even next year.

For example, the async problem still exists and still has absolutely no viable path forward, or even MVP approach.


Any source on the Zig 1.0 thing? As far as I can tell it is not even on the horizon.

I'm sorry, but that feels like an incredibly poorly informed decision.

It's one thing to decide to vendor everything - that's your prerogative - but it's very likely that pulling everything in also pulls in tons of stuff that you aren't using, because recursively vendoring dependencies means you are also pulling in dev-dependencies, optional dependencies (including default-off features), and so on.

For the things you do use, is it the number of crates that is the problem, or the amount of code? Because if the alternative is to develop it in-house, then...

The alternative here is to include a lot of things in the standard library that don't belong there, because people seem to exclude standard libraries from their auditing, which is reasonable. Why is it not just as reasonable to exclude certain widespread ecosystem crates from auditing?


> It's one thing to decide to vendor everything - that's your prerogative - but it's very likely that pulling everything in also pulls in tons of stuff that you aren't using, because recursively vendoring dependencies means you are also pulling in dev-dependencies, optional dependencies (including default-off features), and so on.

What you're describing is a problem with how Cargo does vendoring, and yes, it's awful. It should not be called vendoring; it's just "local mirroring", which is not the same thing.

But Rust can work just fine without Cargo or Crates.io.


I think the language is doing great, not least _because_ it has slowed down a bit. To me it's an indication that it has found a decent plateau right now where people can get useful things done, and where the Rust language and compiler teams are eager to provide a stable product that doesn't break things willy-nilly.

A lot of the complaints I see are not super well thought through. For example, a lot of people complain about async being too explicit (having a different "color" than non-async functions), but don't consider what the ramifications of having implicit await points actually are.
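For example (a toy sketch with invented function names), every explicit `.await` marks a point where the task can suspend and other code can run:

```rust
// Each explicit `.await` is a visible suspension point: the task can yield
// there, other code can run, and anything held across it (locks, borrows,
// invariants) is easy to spot in review. With implicit awaits, `transfer`
// would read like straight-line code while still being able to suspend twice.
async fn fetch_balance(account: u64) -> u64 {
    account // stand-in for real async I/O
}

async fn transfer(from: u64, to: u64) -> (u64, u64) {
    let a = fetch_balance(from).await; // suspension point #1
    let b = fetch_balance(to).await;   // suspension point #2: `a` may be stale by now
    (a, b)
}

fn main() {
    // Nothing runs until an executor polls the future; this only shows where
    // the suspension points are spelled out in the source.
    let _future = transfer(1, 2);
}
```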

Even in this otherwise fine article, some of those desired Fn traits are not decidable (halting problem). There's a bit of a need to manage expectations.

There are definitely legitimate things to be desired from the language. I would love a `Move` trait, for example, which would ostensibly be much easier to deal with than the `Pin` API. I would love specialization to land in some form or another. I would love Macros 2.0 to land, although I don't think the proc-macro situation is as bad as the author presents it.

The current big thing that is happening in the compiler is the new trait solver[0], which should solve multiple problems with the current solver, both cases where it is too conservative, and cases where it contains soundness bugs (though very difficult to accidentally trigger). This has been multiple years in the making, and as I understand it, has taken up a lot of the team's bandwidth.

I personally like to follow the progress and status of the compiler on https://releases.rs/. There's a lot of good stuff that happens each release, still.

[0]: https://rustc-dev-guide.rust-lang.org/solve/trait-solving.ht...


I’ve said this before, but the whole function colour thing could be summarised as: “here’s a pain point easily addressed with monads, but I don’t want to consider monads, so let’s turn everything inside out to avoid thinking about monads.”

To which many sensible people respond “I don’t want to think about monads either, but is the pain point really that bad?”


There are several sources of timing information, and I think in this context "nanosecond precision" just means that Tracy is able to accurately represent and handle input in nanoseconds.

The resolution of the actual measurements depends on the kind of measurement:

1. If the measurement is based on high resolution timers on the CPU, the resolution depends on the hardware and the OS. On Windows, `QueryPerformanceFrequency()` returns the resolution, and I believe it is often on the order of tens or hundreds of nanoseconds (see the sketch after this list).

2. If the measurement is based on GPU-side performance counters, it depends on the driver and the hardware. Graphics APIs allow you to query the "time-per-tick" value to translate from performance counters to nanoseconds. Performance counters can be down to "number of instructions executed", and since a single instruction can be on the order of 1-2 nanoseconds in some cases, translating a performance counter value to a time period requires nanosecond precision.

3. Modern GPUs also include their own high-precision timers for profiling things that are not necessarily easy to capture with performance counters (like barriers, contention, and complicated cache interactions).
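For case 1, here's a rough way to see the effective clock granularity on a given machine (just a sketch using the standard library; on Windows, `Instant` is backed by the performance counter):

```rust
use std::time::Instant;

// Spin until the high-resolution clock reading changes and record the smallest
// observed step. The result reflects the hardware/OS combination described in
// point 1, typically tens to hundreds of nanoseconds.
fn main() {
    let mut smallest_step_ns = u128::MAX;
    for _ in 0..1_000 {
        let start = Instant::now();
        let mut next = Instant::now();
        while next == start {
            next = Instant::now();
        }
        smallest_step_ns = smallest_step_ns.min((next - start).as_nanos());
    }
    println!("smallest observed clock step: {smallest_step_ns} ns");
}
```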


Yes, that's my understanding, and why I asked. I disagree about "in this context", though, because the context is a pitch. If I were going to buy hardware that claimed ns resolution for something I was building, I would expect 1 ns resolution, not "something around a few ns" and not qualified with "only on particular hardware". If such a product were presenting itself in a straightforward way, to be compared to similar products while respecting the potential user, it would say "resolutions down to a few ns" or something more specific but accurate.

There was even a discussion not long ago about how to market to technical folks and what not to do (this is one of the things not to do):

https://www.bly.com/Pages/documents/STIKFS.html

https://news.ycombinator.com/item?id=41368583


Here is another perspective: In the 2000s, many of us were children and teens. Now we’re not, and definitely don’t want our own kids to be exposed to the same Internet as we were back then…

People romanticize that time way more than it deserves.

I’m not a fan of censorship, but there’s a case to be made for reasonable adults deciding the ground rules. Case in point: Twitter.


> Now we’re not, and definitely don’t want our own kids to be exposed to the same Internet as we were back then…

Eehhh speak for yourself. I wish the young generation could experience the free and open and wild Internet that I did in the 90s and 00s...

Now they're stuck with ~10 websites run by giant megacorporations and "adults deciding the ground rules"...

Give the kids some credit, they're smarter than you think...


>I wish the young generation could experience the free and open and wild Internet that I did in the 90s and 00s...

Those were exhilarating and heady times for many teenagers and children, to be sure, but you don't have any children of your own, do you?


I have kids, and I wish they could have the old web, warts and all. It was a much better place for learning and connecting than what we have now.

Why would you go to the trouble when nobody else is doing it, given that the success of the system is highly dependent on network effects?

These things have first-mover disadvantage, and last-mover advantage.


But it's not dependent on network effects: you can follow people, and they can follow you back, even if you run your own server.

You miss out on being in the explore tab, but I don't think that's the main source of traffic, and various bots or groups that re-share toots will likely spread your account far and wide.

