This is exactly right. DOGE deserves no praise. Their goal is not to cut spending to bring money back to the people. Their goal is to gut the government itself and make it ineffective in improving people's lives. They don't actually care whether these departments are "wasteful", and anyone who thinks they do has bought and drunk the snake oil.
This is much more of a manpower and money problem than it is a technical one. Of course it's possible to fork Linux and rewrite it in Rust. But who would spend all that time and energy doing that without the Linux Foundation's funds and expertise? You'd probably burn out within a few years, long before substantially converting the code base.
I'm absolutely not bullish on LLMs, but I think this is kinda judging a fish on its ability to climb a tree.
LLMs are modeling typical constructions of text, not an understanding of what the text means. If you ask one what color the sky is, it finds what text usually follows a sentence like that and tries to construct a response from it.
If you ask it the answer to a math question, the only way it can reliably figure it out is if its training data contains an exact copy of that math question. Asking it to choose things from a list is kinda like that, though one could imagine the designers supplementing the pure LLM with a different technique.
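A toy way to see the "what usually follows" idea (my own illustrative sketch, nothing like a real model): a bigram counter that, given a word, suggests whichever word most often followed it in a tiny made-up corpus.

    #include <algorithm>
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    // Toy "next word" predictor: count which word follows which in a
    // (made-up) corpus, then emit the most frequent follower. Real LLMs
    // learn vastly richer statistics over tokens, but the core move is
    // still predicting a likely continuation, not understanding.
    int main() {
        const std::string corpus =
            "the sky is blue the sky is blue the sky is gray";
        std::map<std::string, std::map<std::string, int>> followers;

        std::istringstream in(corpus);
        std::string prev, word;
        in >> prev;
        while (in >> word) {
            ++followers[prev][word];
            prev = word;
        }

        const auto& after_is = followers["is"];
        auto best = std::max_element(
            after_is.begin(), after_is.end(),
            [](const auto& a, const auto& b) { return a.second < b.second; });
        std::cout << "is -> " << best->first << '\n';  // prints "is -> blue"
    }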
I often feel that when C++ posts come up, the majority of commenters are people who haven't deeply worked with C++, and there are several things people always miss when talking about it:
- If you're working in a large C++ code base, you are stuck working with C++. There is no migrating to something like Rust. The overhead of training your engineers in a new language, getting familiar with a new toolchain, working with a new library ecosystem, and somehow transitioning your code so it interoperates with the existing C++, isn't buggy, and adapts to the new paradigms is all extremely expensive. It will grind your product's development to a buggy halt. It's a bad idea.
- Every time a new set of features (e.g. reflection, concepts, modules, etc.) is released, people bemoan how complicated C++ continues getting. But the committee isn't adding features for the sake of adding features; they're adding features because people are asking for them, and contributors spend years of their lives writing papers for the committee, trying to improve the language so everyone can write better code. What you see as horrifying new syntax, I see as a great way of fixing a problem I've been dealing with for years.
- Yes, it's a gross homunculus of a language. If I could switch our team to Rust without issues, I would in a heartbeat. But this is the beast we married. It has many warts, but it's still an incredible tool, with an amazingly hard working community, and I'm proud of that.
It's basically like clockwork: you can assume that any post about C++ language evolution is going to have a number of people saying one or all of the following:
1) "CPP keeps getting complex in useless ways. Just use C (maybe C with classes style)". This viewpoint is correct in the sense that modern CPP is essentially a different language than C++98. But I disagree with the rest incredibly strongly - modern C++ is more expressive, safer, and often more performant than the old-school style. Things like unique_ptr, string_view/spans, RAII, etc are very useful and reduce boilerplate code as well as manage complexity.
2) "CPP is garbage, use Rust instead". I have not personally written in Rust, but I do find it to be a very interesting language. I would consider very strongly writing a new project in Rust. But most C++ projects are not new, and although us nerds always love rewriting perfectly working code, it's a good way to shoot your business in the foot.
3) "The template system is obscene". I mean, this is true. :) I do occasionally sprinkle metaprogramming into my code, because it solves some problems incredibly well. But it is essentially a different language grafted on at compile time. if constexpr, concepts, etc help enormously with this problem. And yes, those have all been introduced very recently...
Another point is the ecosystem. Imagine you trained your whole team on Rust and scrapped the entire codebase. Now you have to actually go and rewrite all your upstream dependencies: each library, infra integration, algorithm, data structure, etc.
Then you gotta hire new people when those engineers leave, and the Rust hiring ecosystem is also just not there yet.
I've worked mostly in robotics. Literally everything is C++. All the grad students know it. ROS is there setting the mental models, for better or worse. The list goes on.
You have really strong points on hiring, but the notion that you need to rewrite all your dependencies is pretty weak. Nearly every modern language includes facilities for writing bindings, and I do believe there are Rust-to-C++ binding tools.
This is true in theory, but in practice it's a nightmare.
YMMV, but even small bits of Python, Go, or (yes) Rust that have crept into the robotics stacks at the various places I've worked have created problems for incoming new hires, and for maintenance even for senior folks. Python less so than the others, but Python is challenging to deploy on a vehicle because of various hard and soft problems.
In particular, Rust interop with C++ is poor. Should I recompile all ROS packages to allow me to call a few things in Rust? Not at this time.
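The usual workaround is squeezing everything through a plain C ABI, which means flattening away templates, exceptions, and std types before Rust can touch anything. A minimal sketch of that kind of boundary (the Pose/planner names are hypothetical):

    // c_shim.h -- a C ABI surface both sides can agree on. Rust's
    // bindgen (or a hand-written extern block on the Rust side) can
    // consume this, but only because everything C++-specific is gone.
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct Pose {
        double x, y, theta;
    } Pose;

    /* Hypothetical planner entry point; internally this would wrap a
       C++ class, catch its exceptions, and copy results into plain
       structs -- which is exactly the busywork that makes interop
       painful at scale. */
    Pose plan_next_pose(const Pose* current);

    #ifdef __cplusplus
    }  /* extern "C" */
    #endif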
imo the issue is that academic training in robotics means doing whatever your advisor tells you and playing the academic rat-race game, OR working in industry churning out features as fast as possible to keep your company alive. Either way, you're not going to be able to spend any meaningful time learning how to do FFI in Rust or Python or whatever. I think that's the real reason robotics software is in the dark ages. But then again, I'm not in the field and this is an armchair take. So yeah.
Case in point, until Rust is fully bootstrapped, even in an ideal world C++ would still be around.
Then there are all those industry standards whose definitions are only available in C, and sometimes C++.
Likewise, when I need to plug into the JVM, CLR, V8, or ART runtimes, I reach for C++; there's no need to introduce another layer into the sandwich in terms of build tools, IDE tooling, and stuff to debug.
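The JVM case illustrates why: the native boundary is already specified in C/C++ terms via JNI, so C++ slots in with zero extra layers. A minimal sketch (the com.example.Native class is made up):

    #include <jni.h>

    // The JVM resolves this symbol by name for a Java declaration like:
    //   package com.example;
    //   class Native { static native int add(int a, int b); }
    extern "C" JNIEXPORT jint JNICALL
    Java_com_example_Native_add(JNIEnv* /*env*/, jclass /*cls*/,
                                jint a, jint b) {
        return a + b;
    }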
Not familiar with "boosting", but I'm definitely a fan of concepts and reflection.
Reflection is absolutely gonna feel completely alien to people for a while, but there are a lot of areas in our codebase where I wish I could simply describe a data layout and have the efficient code generated for me instead of writing tons of boilerplate. Take JSON serialization, for example. Currently you have to write your (de)serialization by hand, but with the new reflection machinery one could generate it from a struct's members, with fewer errors. It'll be wonderful for writing new libraries that will make our lives easier.
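To make the boilerplate concrete, here's the status quo for a hypothetical Point struct: every field is spelled out by hand, and every new member means editing the function again. Reflection would let a library generate this from the struct definition instead:

    #include <sstream>
    #include <string>

    // Hypothetical struct used for illustration.
    struct Point {
        int x = 0;
        int y = 0;
    };

    // The status quo: every member is named twice (once in the struct,
    // once here), and adding a field means remembering to update this.
    std::string to_json(const Point& p) {
        std::ostringstream out;
        out << "{\"x\":" << p.x << ",\"y\":" << p.y << "}";
        return out.str();
    }

    // With C++26 reflection (P2996), a single generic to_json<T> could
    // walk T's data members at compile time and emit this automatically.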
Thanks for your feedback! It would be great to have reflection in C++; I didn't know it was on the radar! Is there a compiler that currently supports it?
That sounds like a good thing to me. I would prefer the people who hold power in our society be the ones democratically elected, not the ones who lucked into a leviathan amount of money and can now buy Twitter for fun.
Separation of powers is a very important reason not to limit power to just politicians. Just as we would prefer media moguls not also be politicians, leaving control of the media to just the government is a bad thing for democracy, as it compounds power toward the existing holders.
I'm open to being convinced otherwise, but I feel like I'm the only one who doesn't get why opt-out telemetry is such a big deal.
Sure, if it's a software library, I don't want it doing random network calls during my runtime. That's just rude.
But if it's a user application (including a compiler), I don't see what the fuss is about. Of all the myriad ways our data is harvested every single day, telemetry seems very unhelpful to advertisers and hackers, but very helpful to the people whose job it is to make the software you use better. I'd love to help them make the software I use better.
Where is that line for you? Is occasionally checking for security updates strictly necessary? Is reporting a crash to the devs so they can fix it necessary? What about sending system & usage telemetry so they can prevent future bugs?
For me, it's like GP said: absolutely no unauthorized network traffic unless strictly required for the purpose of the software (e.g. curl). No security updates, crash reporting, or telemetry unless you prompt the user and show exactly what will be sent (similar to how syncthing does it).
Anything less is voyeurism.*
* Extreme language, I know, but it's precisely how I feel about these acts.
A program that does not connect to the network at all today can also start shipping off your ssh keys tomorrow. Anything can always be added or changed.
OP is obviously talking about people whose area of research/development/product involves web scraping... This feels like being purposefully obtuse.
"face scan" doesn't imply "3d face scan", it just implies additional data beyond a regular photograph. Isn't a 3d scan just a bunch of 2d scans lumped together? How many photos need to be taken before it's okay to call it a "scan"?
> How many photos need to be taken before it's okay to call it a "scan"?
I think that's actually a really interesting question.
To me, intuitively, it's pretty clear that if you took 100 photos of someone's head being rotated 1° each time, to put together a model of their face, that's a scan.
It's also clear that, intuitively, just 2 photos, or even 4 photos at 25° each, is not a scan. They're just a few individual photos.
All of which feels pretty analogous to asking: how many pixels does a bitmap need in each dimension before we call it a photo?
A 4x4 bitmap is not a photo. A 100x100 bitmap certainly is; we might call it thumbnail size, but we'd all agree it's a photo.
So where's the transition? I'd suggest a value of around 20 as the gray-area threshold. A 20x20 image is maybe starting to turn into a photo. Similarly, a collection of 20 images taken at regularly spaced angles is maybe starting to turn into a 3D scan.