I just ask the interviewers how much freedom they have in the interviewing process (usually it's pretty clear) and whether they would do something different if they could. This kind of follows naturally from questions about the non-dev responsibilities that devs have. A lot of fellow engineers at my current company believe their little quizzes actually have a lot of signal value when they really don't; I avoid working with them, since they tend to have other weird beliefs. If my interviewer is giving me a whiteboard problem somewhat reluctantly, because it's what he's supposed to do rather than what he'd like to do, he's probably OK to work with. The follow-up question is then whether I'd actually be working with him, since so many of these companies just throw you into an interview loop with whoever is available that day...
Surface Pro / Surface Book. I presume Linux support on them is good enough by now, but even if you're stuck with Windows, you can now get a proper bash environment without Cygwin, which I hear works well.
Maybe some fingers of the government are demanding a backdoor, but other fingers of the government want Tor to be as secure as possible. Tor is useful for the operations of national intelligence agents too, and it can only stay useful for that as long as they know it doesn't have a backdoor that, say, the Chinese could discover and exploit.
Is it possible that the US military has moved on to alternative methods of hiding its identity? If the NSA can mandate backdoor access to every data centre in the country, couldn't they work with the military to use those backdoors to hide their own identities? To an observer it looks like someone is accessing Google, but in actuality it's a CIA field agent sending top-secret information to the Pentagon through a backdoor in a Google data centre.
How do you know the stuff at the gas station is actually 91 and not 87? How do you know the regulators do their jobs properly and aren't bought out by Big Oil?
The answer to all of these questions is that random private individuals or groups do the tests. If it's a big enough concern among their customers, some of them will do it. Some customers at regular gas stations already do these tests on their own, because not all gas stations are the same. (A few people even make their own gas.)
> (you can use GCC but the code quality wouldn't be that good)
Maybe... On the other hand, you can actually use the full program space available, and gdb. (The last time I used Keil was 2011; I quickly moved to a GCC toolchain rather than pay for a license. They're really not selling me on their tool when they arbitrarily restrict my max program size such that I can't actually evaluate it properly.)
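For what it's worth, the GCC-based workflow I mean looks roughly like this -- a sketch only, assuming an ARM Cortex-M target and an OpenOCD-compatible debug probe; the linker script, board config, and flags are placeholders you'd swap for your actual part:

    # cross-compile with the GNU ARM toolchain; no artificial code-size ceiling
    arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -Os -g \
        -T your_board.ld --specs=nosys.specs -o firmware.elf main.c
    arm-none-eabi-objcopy -O binary firmware.elf firmware.bin

    # debug the running target through a GDB server (OpenOCD here)
    openocd -f board/your_board.cfg &
    arm-none-eabi-gdb firmware.elf -ex "target remote localhost:3333"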
A culture of many small changes means that you deal with smaller problems relatively quickly. The more you fall behind, the bigger the jump to where you should be, and it's not a linear relationship.
At one place where I work, we're on Node.js 0.10, which is several releases behind. It's causing us a bunch of problems, because while 0.10 technically isn't EOL'd yet, npm modules behave as if it is... However, we've left it so long that the jump to current stable is a giant task, which we don't have the time for given other business reqs.
Tests indeed don't guarantee safety, but lots of small changes are easier to deal with than the occasional massive change. It's also the basic concept behind version control.
This is my experience as well with node and shrinkwrap. I see people using shrinkwrap to avoid potential issues, but what ends up happening is they get stuck on old versions of dependencies, and when there's a bug fix or new feature that's needed it can be very difficult to upgrade. Instead, I prefer to always keep my dependencies up to date, especially across new major versions, precisely to avoid this problem.
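Concretely, the routine I try to stick to is roughly the following -- a sketch assuming the classic npm shrinkwrap workflow (pre-npm 5 lockfiles); "some-package" is just a stand-in for whatever dependency needs a major bump:

    # see which dependencies have drifted from their latest releases
    npm outdated

    # pull in whatever updates the semver ranges in package.json allow
    npm update

    # bump major versions explicitly, one at a time
    npm install some-package@latest --save

    # regenerate the lockfile so everyone installs the same tree
    npm shrinkwrap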
Do you think that, in the case of the problem from this article, the Perl devs should have had a test checking that their update doesn't break someone's Emacs when it's used in client-server mode, with one part launched via a Perl script and the other by some other means, on a Linux with the "capabilities" feature?
This story wasn't about trivial day-to-day developer bugs, but about the kinds of problems that happen in really complex systems.
Skimmed the PDF; it seems like the point of this paper is just to show that yes, classical systems can (with memory, e.g. the reference https://arxiv.org/abs/1007.3650) simulate quantum ones, and thus finding "characteristic quantum numbers" shouldn't make one immediately suspect something quantum is going on. In the discussion it's like the ultimate nitpick: "The characteristic trait of QT rely on the fact that the quantum bounds are achieved without employing extra resources such as memory. Therefore, the principles needed to fully derive QT (in the spirit of Refs. [33–37]) should account for that."
There are some people who find the concept of quantum physics philosophically displeasing and try however they can to ignore all the experimental evidence and say we're really in a classical universe. This isn't a case of one of those.
You can do that with some proprietary software too -- proprietary doesn't have to mean no source code and no source modification. Unfortunately it has come to mean that for too many businesses, and for too many users willing to pay money to those businesses, so yeah, it's a point in favor of open source. I also appreciate that contributing to and encouraging the use of open source has the societal benefit of training my eventual replacements for free, and that open source under certain licenses guarantees that improvements to that software from parties other than the primary maintainer remain open.