My guess would be that for non-technical people it's much easier to schedule visible work (frontend) than invisible work (backend). Are the people managing you perhaps less technical? A good technical lead should understand what work to give you, but PMs and scrum masters frequently don't.

I guess the other thing I'd ask is: why the strong resistance to frontend work? I get not seeking jobs for it or not preferring it (I don't seek those either), but for my own personal productivity I like to have a decent idea of how to build a frontend, in case I'm waiting on someone who's overloaded to deliver my feature, etc. It can be helpful for a career if you don't have to frequently tell people "I can't do that", versus "it's not my strong suit, but I can pitch in".


> why the strong resistance to frontend work?

I am bad at it. So they get an under-performing employee with low morale and I get complaints about my work.

> It can be helpful for a career if you don't have to frequently tell people "I can't do that", vs, "it's not my strong suit but I can pitch in"

I usually do the latter, but then in December I thought I'd better make sure, reasoning that things go better the clearer I am ahead of time. That was the first time I tried saying, "Seriously, I can't do that kind of work." This time I got nothing but that kind of work.


I feel like squash-merge accomplishes what I want (easy revertibility and a simpler history). I'll admit I need to learn it better, but every time I've tried to use rebase I feel like I'm playing with fire, and I end up doing silly things like making a backup of the whole folder. I just think the process is unintuitive and somewhat scary, and I don't see the net gain. Like, outside of looking at the last couple of days, is anyone really spelunking into history that much anyway?


I usually turn it off to save disk space. I dunno, with 32GB of RAM I just don't see how the OS should run out of memory, and I don't really want Windows burning out my SSD.


I'm not sure Rust is in the same space as C++, so I'm not sure I'd call it a better C++.

I use C++ because I want performance without all the manual bookkeeping that C requires. I try to write my code in a safe way, but language safety just isn't a priority for me. I guess that's why I'm not using Rust. Fighting the borrow checker doesn't seem worth it to gain something that's just not that much of a priority. But I'm not writing an OS or a server.

I'd think of Rust more like a better Ada, or some other language where safety is the first priority.


Story time: Chrome is a Frankenstein of C and C++ code (both directly and through called libraries, static or dynamic). Now C++ obviously has `std::string`. C of course has `const char *`. To facilitate interoperability, C++ has various methods to implicitly or explicitly convert between the two.

At one point it was found that every keypress in the Omnibox resulted in 25,000 string copies.

So my point is that you can write your C++ as carefully as you like, but in any sufficiently complex code base you're going to need a pointer to something, and then you've really lost all control and safety. The safety in C++ is a bit of an illusion.
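
To make that concrete, here's a made-up sketch (not Chrome's actual code) of how the copies pile up at a C/C++ boundary:

    #include <string>

    void log_query(const char* q);        // some C-style API lower down

    void handle_key(std::string query) {  // copy #1: parameter constructed
        std::string upper = query;        // copy #2: a defensive copy
        log_query(upper.c_str());         // back across the C boundary
    }

    void on_keypress(const char* buf) {
        handle_key(buf);                  // implicit const char* -> std::string
    }

Multiply that by every layer converting back and forth, and 25,000 copies per keypress stops being surprising.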



Well, that's horrifying. Presumably the value is being copied whenever it is converted from a const char * to a std::string?

The Right Thing (TM) would probably be to refactor some of the `std::string`s into `std::string_view`s, for instance by adding overloads where it makes sense. I doubt you could avoid all the copies, but I think you could cut them down substantially if you collected metrics and focused on the most egregious cases.

Of course, I do not envy the person who is tasked with doing that, and I could be wrong for any number of arcane technical reasons.
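
Something like this, with hypothetical names (though in practice you'd probably swap the signature rather than overload, since a `const char *` call would be ambiguous between a `std::string` overload and a `std::string_view` one):

    #include <string>
    #include <string_view>

    // Before: calling this with a const char* implicitly allocates and
    // copies into a temporary std::string just to read it.
    bool has_scheme_old(const std::string& url);

    // After: std::string_view borrows the caller's bytes instead.
    bool has_scheme(std::string_view url) {
        return url.rfind("http://", 0) == 0;  // prefix check, zero copies
    }
    // has_scheme("http://a.com") and has_scheme(some_std_string)
    // both bind without allocating.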


I can only imagine that using std::string_view in a massive, complex application would be horrifying.

The borrow checker is one of the benefits of Rust in that case, simply because you can avoid copies while actually being able to trust that you're not opening the door to security & maintenance hell in the process.


Fun fact: this happened during the standardization of std::string_view, which ended up landing in C++17, a few years after this was fixed. Not that non-standard versions didn't already exist.

In the end they did a number of things to fix this; the patches are linked in the story I linked above.

std::string_view has seemed, to this relative outsider, to be semi-controversial. It makes it pretty easy to introduce use-after-frees, it's easy to forget that it's not null-terminated, and it isn't bounds-checked. So while it helps with some issues, it can create others, meaning it's not always a clear win.
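
The use-after-free one in particular is easy to hit (a small sketch):

    #include <string>
    #include <string_view>

    std::string_view first_word(std::string_view s) {
        return s.substr(0, s.find(' '));
    }

    int main() {
        // The temporary std::string dies at the end of this statement,
        // but sv still points into its freed buffer. No error, no warning.
        std::string_view sv = first_word(std::string("hello world"));

        // Also easy to forget: sv.data() is not null-terminated at
        // sv.size(), so handing it to a C API is another trap.
        (void)sv;
    }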


A lot of the copies might be bits of code defensively copying the string because they don't want to deal with working out lifetime safety.


Fair, but I never said it was safe, just that perfect safety isn't of interest to me.


I don't really see the distinction you're making. C++ will still be around for legacy projects that don't want to switch, but I can't think of a new project that would benefit from picking C++ over Rust, apart from needing to integrate with the C++ ecosystem beyond what bindgen offers.

You get to write code that's just as or more performant, and you get to jettison the decades of bad language design that is C++, which is win/win to me.


The pool of C++ engineers is much bigger, and like it or not it's a very proven solution. Rust still seems to have a lot of areas where it's immature (GUI toolkits, for example).


Maybe because Rust is not a language made for GUI applications? For GUI applications you are better off with a higher-level language.


Don't you think that's a weird limitation though? Nobody says C++ isn't a language made to make GUI applications. If you're going to replace C++ you should probably be able to do the same things at least as well.


It's not actually a limitation, the ecosystem is just young:

https://github.com/iced-rs/iced

My general point though is that there's still some ecosystem/legacy reasons to stick with C++, but Rust very much fills the same niches as C++.


It's not "language safety", it's memory safety. You're saying that memory safety isn't a priority?

When you run Coverity or another static analyzer on your code, how many violations do you find? How many were unexpected?

The borrow checker is always being run; the difference is whether it's in the compiler (Rust) or in your head (C++). And I trust the compiler much more than I trust myself...
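
For a flavor of the rule you're otherwise enforcing manually, a trivial sketch:

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        const int& first = v[0];  // a "borrow" into v's buffer
        v.push_back(4);           // may reallocate; 'first' now dangles
        return first;             // compiles cleanly, UB at runtime
    }
    // rustc rejects the equivalent program: you can't hold &v[0]
    // across v.push(4).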


For what I use C++ for (game development and embedded systems), smart pointers are "good enough". Especially for single-player games: the worst you run into is a crash, or my Arduino code writes past a buffer.
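
"Good enough" meaning ownership mostly looks like this (a minimal sketch, hypothetical names):

    #include <memory>

    struct Texture { int id = 0; };

    // Ownership is explicit and cleanup is automatic; nothing to delete
    // by hand, even on early returns or exceptions.
    std::unique_ptr<Texture> load_texture(const char* /*path*/) {
        return std::make_unique<Texture>();
    }

    int main() {
        auto tex = load_texture("player.png");
    }   // tex freed here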


If you mean WSL for containers, macOS needs a VM too. If you're doing C++, macOS dev tools are... bleak. Great for webdev though.


↑ This!

I would love to buy Apple hardware, but not from Apple. I mean: an M2 13-inch notebook with the ability to swap/extend memory and storage, a regular US keyboard layout, and proper desktop Linux (Debian, Alpine, Mint, Pop!_OS, Fedora Cinnamon) or Windows. macOS and the Apple ecosystem just get in your way when you're trying to maintain a multi-platform C++/Java/Rust code base.


WSL for normal stuff. My co-worker is on Windows and had to set up WSL to get a linter working with VS Code. It took him a week to get it working the first time, and it breaks periodically, so he has to do it all over again every few months.


I'm developing on Windows for Windows, Linux, Android, and web, including C, Go, Java, T-SQL, and MSSQL management. I do not necessarily need WSL except for C. SSH is built directly into the Windows terminal and is fully scriptable in PowerShell.

WSL is also nice for Bash scripting, but it's not necessary.

It is a checkbox in the "Add Features" panel. There is nothing to install or set up. Certainly not for linting, unless, again, you're using a Linux toolchain.

But if you are, just check the box. No setup beyond VS Code, bashrc, vimrc, and your toolchain. Same as you would do on a Mac.

If anything, all the Mac-specific quirks make setting up Linux toolchains much harder. At least on WSL the entire directory structure matches Linux out of the box. The toolchains just work.

While some of the documentation is in its infancy, the workflow and versatility of cross-platform development on Windows is, I think, unmatched.


This. I have to onboard a lot of students to our analysis toolchain (nuclear physics, ROOT-based, C++). 10 years ago I prayed that a student would have a Mac, because it was so easy. Now I pray they have Windows, because of WSL. The toolchain is all compiled from source, and pretty much every major version of macOS, and often minor versions too, breaks the compilation of ROOT. I've had several release upgrades of Ubuntu that required only a recompile, if that, and it always worked.


Unless he is doing Linux development in the first place, that sounds very weird. You most certainly don't need to set up WSL to lint Python or, say, JS in VS Code on Windows.


That sounds wild; you can run Bash and Unix utils on Windows with minimal fuss without WSL. Unless that linter truly needed Linux (and I mean, VS Code extensions are TypeScript...), that sounds like overkill.


Don't you need Cygwin or Git Bash if you don't use WSL? That's kind of fussy.


As Windows/UNIX developer, I only use WSL for Linux containers.


Pretty neat, but is there any Windows or macOS support planned? I wouldn't mind renting out my GPU when it's idle, but I don't really want to go through the process of dual-booting, etc.


Windows support is definitely something we are thinking about.


What are you doing with GPUs that requires Windows or Mac support?


Video games


Have you looked at Nvidia GeForce NOW? It's like $10/mo for a pretty decent streaming gaming rig. I'm very happy with it - I don't have to deal with Windows and can play AAA games on my Macbook Pro at 60Hz (1080p).


> on my Macbook Pro at 60Hz (1080p).

I think you just answered yourself. Some of us like to play games at 4K at 80Hz+, with no subscription fees, no internet bandwidth requirements, no added latency, and the ability to mod.


Yeah, but that has nothing to do with the context of the question. Someone is specifically asking about accessing a GPU over the internet for video games.


Nowadays it's rare that a Windows-only game that I want to run doesn't run flawlessly on my Linux machine (through Proton/Wine). I wouldn't recommend going outside of Steam though unless you're willing to do some troubleshooting.


Try running COD Warzone on Linux without getting quarantined to cheater-suspect lobbies.


Ah yes, apologies, that's an important caveat I forgot to include.

Games that use anti-cheat are a mess on Linux. I don't play any of those games, and if you do, then you're likely to run into some trouble going Linux-only.


Yeah, anti-cheat sucks overall, tbh. I loathe giving kernel-level access to a random video game's anti-cheat system.

Short of dedicated hardware (Xbox/PS), I'm not sure what else could be done.


So what are you looking for, an RDP session with an attached GPU?


I'm not generally for singling out a person and slinging mud at them, but I also feel like unless there's a real social cost to acting the way these parasitic executives act, there's little incentive for them to change their behavior. There should be a sense of shame in ruining a once-good product for career benefits and short-term growth. I think the tone is appropriate in that it conveys that this is not a good-faith effort gone wrong, but rather an executive acting in a cynical and reprehensible way.


I switched to DuckDuckGo recently, partially because Google results don't seem very good anymore, but also very much because of how hostile Google is towards VPNs. I like having a VPN when I'm travelling or on public wifi as a sensible precaution, but Google constantly forces me through irritating "prove you're human" puzzles.


I believe they're referring to the Big Rip: https://en.wikipedia.org/wiki/Big_Rip


That Wikipedia article might be the worst physics article I've read on WP.


Which edits did you make to improve it?


Have you ever tried contributing to a Wikipedia article? It's a nightmare: the other contributors will revert all your changes and fight you on everything, and it's not worth the effort.


Yes, I've been contributing for two decades as a drive-by editor. I'm occasionally reverted; I suck it up, I don't own these articles. I don't get into fights. I avoid editing articles about politics, especially about nationalist hot buttons. I also move gingerly around some history articles, which are often patrolled by fanatical revisionists.

I don't get nightmares.


How well do you know the subject? I will deal with it for you.

You can post a diff and I will find someone who will get it in.


None. I know nothing about the Big Rip. Judging from the (brief) talk page, the article doesn't get a lot of love from editors.


Outside of tiny languages, though, pretty much nobody knows the "whole" language. Almost every production language has a ton of quirks that can surprise people who have been using it for years.


And frankly, large numbers of language features aren't needed by most devs. Marshalling in C#, for example, isn't needed unless you're calling into native code. Making raw syscalls in Rust isn't needed unless you're optimizing some low-level systems code.


Expression trees in C# are another example of this.

I have 10+ years of experience (and consider myself highly proficient) in C#, but my interaction with expression trees amounts to mere hours; I've only once or twice needed to solve a very specific small problem, which I could do with a small tweak to code copied from Stack Overflow.

