I found a bug in Intel Skylake processors (inria.fr)
587 points by testcross on July 3, 2017 | 99 comments



One amusing thing about this epic tale is that the Serious Industrial OCaml User disregarded direct, relatively easy-to-implement, very sound advice from Xavier Leroy about how to debug their system! I would like to think that, were I in a similar situation being advised by an expert of that calibre, I would at least humor his suggestions.

Why seek the expert if not for his advice? It brings to mind people disregarding doctors who give them inconvenient medical advice.


Not saying this is what happened in this circumstance, but even an expert has to explain the 'why' of things. "Disable HT and try again" comes with a hint of "please go away", whereas "disable HT so we can eliminate a potential cause" smells more like "I really want to understand this thing."


I know nothing of OCaml culture or why the author is deemed worthy of having his name italicized, but the doctor comparison is upsetting. If the guy who wrote this:

I was tired of this problem, didn't know how to report those things (Intel doesn't have a public issue tracker like the rest of us), and suspected it was a problem with the specific machines at SIOU (e.g. a batch of flaky chips that got put in the wrong speed bin by accident).

were a doctor, he'd be guilty of malpractice. This bug went unreported eight months longer than it needed to. Am I misreading all this somehow?


His name is italicized because he is the primary author of OCaml (and a plethora of other great tools, like CompCert, the first fully-verified compiler). Overall, an exceedingly competent and productive programmer and scientist.

The doctor metaphor isn't perfect; what I was going for is, when you are seeking out an expert's advice and you ignore it, why do you go to see the expert in the first place?


Thanks, that makes sense. I don't disagree about taking expert advice! I was just really disappointed to read the part of the article where the author doesn't bother to figure out how to navigate Intel support and report the bug.


Bug reports are gifts to vendors from inconvenienced users who have just done free work on the vendor's behalf. There is absolutely no obligation on the part of a user to report bugs.


I am shocked by this attitude. Maybe I shouldn't be, and I'll certainly give this all some thought.

There's at least one obvious error in your statement: if an inconvenienced user's bug report results in less downtime for other users, it is a "gift" to other users, as well as a "gift" to the vendor.

But it says something about our profession if we regard putting flags down to mark the landmines we find a mere courtesy (a gift!) instead of an obligation. I guess that's a debate for a different time and place.


Well, guess what: you're as entitled to your opinion as I am to mine.

But just like a user has no obligation to 'mark the landmines' for vendors, they also have no such obligation towards other users. They do have a right to receive bug-free software in the first place; alas, our industry is utterly incapable of delivering that, which has lowered our expectations to the point where you feel that we have an actual obligation as users to become part of the debugging process.

That is not going to make our lives better.

What will make our lives better is if software producers accept liability for the crap they put out and are unable to opt out of such liability through their software licenses and other legal trickery.

You're just a small step away from making it an obligation rather than an optional thing for users to report bugs; the only difference is that for you the obligation is a moral one rather than a legal one. I really do not subscribe to that: when I pay for something I expect it to work, and I expect the vendor (and definitely not the other users) to work as hard as they can to find and fix bugs before the users do.

But we're 'moving fast and breaking shit' in the name of progress, and part of that appears to extend to being in perpetual beta-test mode. That's not how software should be built, and I refuse to subscribe to this new world order where the end user is also the guinea pig.

Keep in mind that users have their own work to do, are not on the payroll of the vendors, usually have forked over cold hard cash in order to be able to use the code (ok, not in the case of open source), and tend to be less knowledgeable about this stuff than the vendors. They really should not have a role in this other than that they may - at their option - upgrade their software from time to time when told very explicitly what the changes are (and hopefully without pulling in a boatload of things that are good for the vendor but not for them).


> You're just a small step away from making it an obligation rather than an optional thing for users to report bugs; the only difference is that for you the obligation is a moral one rather than a legal one.

I'd argue that a person does have that obligation in some circumstances, yes. And yes, I am thinking in moral rather than legal terms. The legal picture is pretty far outside my expertise, and the professional ethics of software engineering (which would in turn inform the legal picture) seems to be woefully opt-in. As you say, 'moving fast and breaking shit,' perpetual beta test mode, etc. So I'd put the legal stuff aside for now.

For me, the key is that "user" is a deceptive term here. A mere user cannot point to a small piece inside a much larger machine and say "that will blow up occasionally, and I know exactly when." We are talking about engineers. Or at least, I was thinking of the professional obligations of engineers - on the user side of the fence and the vendor side of the fence - and that was informing my comments.

> Keep in mind that users have their own work to do, are not on the payroll of the vendors, usually have forked over cold hard cash in order to be able to use the code (ok, not in the case of open source), and tend to be less knowledgeable about this stuff than the vendors.

Yeah, and I don't think I disagree with you in the "user" case. I really think a software engineer finding a CPU bug is a different case. It seems to me that if we're in possession of knowledge of something as serious and wide-reaching as a CPU bug, we have a reproducible test case, and we don't do anything with it (I mean, at least a tweet or something, for the love of God), we are part of the problem with our profession.


I'm on board with a reporting duty if such a thing will always result in:

(1) a payment from the vendor to the reporter compensating them for time and effort spent at getting the bug to be reproduced

and, crucially,

(2) a requirement for all vendors of software and hardware to respond to bug reports in a timely manner and to have a standardized reporting process.

In that case I can see how such a shared responsibility would work, but as it is, the companies get the benefits and the users get the hardship, with a good portion of reported bugs (sometimes including a solution) going unfixed. That's not a fair situation.

Case in point: I've reported quite a few bugs to vendors over the years but I've stopped doing it because in general vendors simply don't care; most of the time bug reports seem to result in a 'won't fix' or 'here is a paid upgrade for you with your fix in it'.


The security guys seem to be converging on a way of managing both of these - compensating the person reporting the bug, and motivating vendors to respond in a timely fashion or suffer consequences - with bug bounties. Intel should make it easy to report this stuff, but if everyone understood that finding something genuinely interesting resulted in a serious payday, nobody would skip making the call.

The difference between something like Google's bug bounty (capped at over $30k, I think) and a hypothetical bounty for Intel is, well, Intel has a lot more at stake. It's honestly strange that they don't have something in place already. Something like Skylake costs on the order of billions to get out there. It's cool that this Skylake bug was fixable via microcode, but the Pentium FPU bug back in the day cost them half a billion dollars. If such a bug exists, that is the kind of thing Intel should want to have reported as soon as humanly possible. Even the reputational damage they take from something milder like the Skylake bug would justify a bounty system with very serious payouts.


I am genuinely wondering why you think this way. I do not expect my users to report failures. If they do, I am more than pleased (not sure I would call it a "gift") because it shows that my product is good enough that they won't just leave out of frustration.

As technologists, we develop tools and services to capture bugs (both server-side and client-side) so that we gain more insight into how our product operates. A user who takes the time to give a well-crafted bug report is rare. Most of the time it tends to be a legitimate gripe that a feature isn't working.

Update: After writing I am re-reading your comment and now thinking we are on same page.


> figure out how to navigate Intel support

Do you have a step-by-step guide for doing that or are you just assuming that there must be some magic way?


Since we're communicating via rhetorical questions, are you asserting that it's literally impossible to contact Intel support with a defect, or merely tedious? Does it become easier or more difficult if you have a reproducible test case and the sort of connections a prominent FP compiler person has?


I'm trusting the account of the prominent FP compiler person who had a test case and wrote in the linked article that he found no way to submit the issue. You're the one saying this cannot be true, but you're not saying why you think so.


1. That's not what I'm saying

2. That's not what he wrote


No he wouldn't. Doctors are not obligated to write up case studies or submit them to medical publications.


It's a bad analogy, yes. Under the doctor analogy, I'm not sure who the people are who suffered intermittent system problems for months longer than they otherwise would have, had he reported the problem when he found it - not patients under his treatment, certainly.

> Doctors are not obligated to write up case studies or submit them to medical publications.

That's true, although I think we'd all agree that a person who has the knowledge to create a lifesaving treatment for a disease and doesn't bother writing it down because, well, writing is boring, is behaving rather unethically.

But this is merely computer science, not medicine.


A comp.arch poster said:

> The errata refers to the problem showing up on short loops of less than 64 instructions that use AH, BH, CH or DH.

> Looking at the Skylake microarch, the instruction decode queue is 128 uOps thread, 2*64 uOps when threaded. The Loop Stream Detector "can stream the same sequence of µOPs directly from the IDQ continuously without any additional fetching, decoding, or utilizing additional caches or resources." ... "capable of detecting loops up to 64 µOPs per thread". https://en.wikichip.org/wiki/intel/microarchitectures/skylak...

> So maybe the microcode update just shuts off the loopback detector.

https://groups.google.com/d/msg/comp.arch/UkO4Z2FT18c/7YlC0a...

So if the bug is in the loop-detector, and the patch possibly disables it rather than fixes it, then does anyone have any before-and-after performance stats?
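
For anyone who wants to measure, here's a rough sketch of the kind of test involved (mine, purely illustrative): time a tight loop that's small enough for the loop stream detector to capture, before and after applying the microcode update. The loop body, the use of a high-byte register, and the trip count are arbitrary choices; and as the reply below notes, the difference may show up in power draw rather than wall-clock time.

  /* Illustrative only: a tight loop well under 64 instructions that
     touches a high-byte register, timed with clock_gettime. Build with
     gcc -O2 on an x86-64 Linux box, run before and after the update. */
  #include <stdio.h>
  #include <time.h>

  int main(void) {
      struct timespec t0, t1;
      unsigned x = 0;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (long i = 0; i < 1000000000L; i++)
          __asm__ volatile ("orb $1, %%ah" : "+a"(x));  /* uses AH */
      clock_gettime(CLOCK_MONOTONIC, &t1);
      double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
      printf("%.3f s (x = %u)\n", s, x);
      return 0;
  }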


IIRC the Skylake loop buffer is not any faster than the uop cache; instead, the reason for its existence is to save power by not touching the cache. So you'd have to test power consumption instead?


>> Looking at the Skylake microarch, the instruction decode queue is 128 uOps thread, 2x64 uOps when threaded.

No. Skylake does not have 128 μops with HT disabled. Skylake indeed was a big jump from Broadwell where the loopback buffer has 56 entries, 28 per hyperthread or 56 with HT off. Skylake has 64 μops per thread, HT on or off.

64 μops is a lot.


> I worked from the executable provided by SIOU, first interactively under GDB (but it nearly drove me crazy, as I had to wait sometimes one hour to trigger the crash again), then using a little OCaml script that ran the program 1000 times and saved the core dumps produced at every crash.

rr can often be a time-saver in situations like this, providing deterministic replays up to the point of a crash, whereas core dump analysis gives only a single retrospective snapshot.

http://rr-project.org/


As much as I love RR, I am not sure it would've helped here, as the bug requires multiple threads to concurrently run? Also, RR is based on achieving deterministic replay IIRC, so I am not sure it'd be the first choice for a nondeterministic hardware bug?


You can do multiple concurrent rr recordings. A non-deterministic cpu bug would generally cause a divergence in the recording vs replay, so rr would be a decent way to go about this. The way I'd have probably used rr when faced with this is to bisect the recording to find which code is responsible.


rr wouldn't help in this case. From the docs, it "emulates a single-core machine. So, parallel programs incur the slowdown of running on a single core." The skylake bug only occurred under heavily threaded loads.


There's also UndoDB if you can pay for it. Not sure what the differences are between it and RR, but I know with UndoDB you can create a binary for the client that has the recorder built in, so it can automatically record failing states.


Replaying the recorded code under either rr or UndoDB would have revealed that something odd was going on in the CPU when the recording replayed differently depending on whether HT was enabled. Obviously, finding this would have required the tester to replay the recording multiple times with and without HT enabled.


How will emulation trigger a hardware bug?


rr doesn't do any emulation. It records the inputs to a program so that it can be replayed deterministically later. However, it doesn't actually run code multi-threaded, so it wouldn't be a help here.
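
To make the record/replay idea concrete, here's a toy sketch (nothing to do with rr's actual machinery; the log file name and structure are made up): the non-deterministic input is logged on a recording run and fed back from the log on replay, so both runs take the same path.

  /* Toy record/replay, illustrative only: wrap the one source of
     non-determinism so it is written to a log on the recording run
     and read back from the log on replay runs. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  static long get_input(int record, FILE *log) {
      long v;
      if (record) {
          v = time(NULL);               /* genuinely non-deterministic */
          fprintf(log, "%ld\n", v);     /* save it for replays */
      } else if (fscanf(log, "%ld", &v) != 1) {
          exit(1);                      /* log exhausted or corrupt */
      }
      return v;
  }

  int main(int argc, char **argv) {
      int record = (argc > 1 && strcmp(argv[1], "record") == 0);
      FILE *log = fopen("trace.log", record ? "w" : "r");
      if (!log) return 1;
      long v = get_input(record, log);
      printf("input = %ld -> took the %s branch\n", v, (v & 1) ? "odd" : "even");
      fclose(log);
      return 0;
  }

Run it once with "./a.out record", then any number of times with "./a.out"; the replay runs always see the recorded value, so they always take the same branch.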



I've worked so far away from the metal for such a long time, but I still find these types of articles fascinating even though I only understand a small fraction of the info. It's amazing to think of the levels of abstraction, built on top of code at this level, that make my work possible.


How does this bug affect current hardware in production? Is it worth waiting for the fix before buying the new MBPs, for example?


EDIT: my comment was wrong. Thanks. There is a microcode update. Install it and you'll be fine. On macOS it should be installed automatically, according to https://support.apple.com/en-in/HT201518 (although that page doesn't say whether it's available or not)


The article states that Kaby Lake is affected, too. Maybe not the Kaby Lake versions used in MBPs, though?


I'd say no, you likely won't run into trouble because of this.


This is subjective and very anecdotal, so take with a heap of salt:

My 2016 Skylake MBP used to crash very regularly when waking it up from suspend (sometimes multiple times a day). When I first heard about this issue a week ago, I used Xcode Instruments to disable Hyperthreading.

I have not observed a single crash since.


Also anecdotal but I am experiencing no crashes with HT off on my 2016 MBPr. Be sure to turn HT off after a sleep/wake cycle as it gets reset each time.


Did you get a lot of crashes before?


GCC generates code that's smaller but not optimal, because of the potential for partial register stalls (and overall register renaming issues).

It's of course not wrong, but using AH when you're dealing with RAX is a weird anachronism.

Clang does the obvious, correct thing.


"smaller but not optimal" is not really true. It depends on what you're optimising for. In my experience, optimising for size overall, and then speed in the really performance-critical parts (with some expected expansion), gives the best results. Even the non-performance-critical code will have a noticeable effect if its larger size causes more cache misses.

Making use of the "partial registers" (I see them more as separate smaller registers that can be grouped together) effectively can avoid many more instructions.


I don't disagree with you, but then use EAX, not AH (which would also produce smaller code, as you don't need to use a 64-bit constant).

Optimizing for size is good, but what GCC did there made sense in the 32-bit days, not so much today.

Some code uses AH and AL as two separate registers, so the processor may rename them to different internal registers. But then, when EAX is read, the processor needs to merge the parts back together.


TFA says clang actually uses an encoding of AND which operates on RAX but takes only a 32-bit constant, so it is pretty much the solution you propose. And BTW, operating directly on EAX zeroes the upper half of RAX.
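
If you want to see that zero-extension in action, here's a quick sketch (mine, illustrative only; the inline asm just forces the two kinds of register write):

  /* Illustrative only: a 32-bit write to EAX zero-extends into RAX,
     while a write to AH leaves the rest of RAX untouched. */
  #include <stdio.h>
  #include <stdint.h>

  int main(void) {
      uint64_t x = 0x1122334455667788ULL;
      __asm__("movl %%eax, %%eax" : "+a"(x));     /* 32-bit write to EAX */
      printf("%016llx\n", (unsigned long long)x); /* 0000000055667788    */

      uint64_t y = 0x1122334455667788ULL;
      __asm__("orb $0x80, %%ah" : "+a"(y));       /* write to AH only    */
      printf("%016llx\n", (unsigned long long)y); /* 112233445566f788    */
      return 0;
  }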


AFAIK it is standard for all 64-bit instructions that use immediates.


Interesting, it seems you are right and no 64b immediates are possible at all except in MOV.


AH/BH/CH/DH are so seldom useful that it's not really worth adding support for using them to the compiler. They only exist for 4 of the 16 registers, and they only let you address one oddly placed byte (bits 8-15) of the 8 bytes in that register. You get slightly slower code and a more complicated register allocator in order to save, what, a few bytes in the entire program?


AH is no more of an anachronism in 64-bit code than it was in 32-bit code: AL, AH, AX, EAX, RAX are all bit slices of the same register. The addition of RAX doesn't change the fact that AH is still a bit weird (it's the only one that doesn't include the low bits of the full register).
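
The slicing is easy to picture with a little-endian union (an illustration of the layout only; the field names just mirror the register names):

  /* Illustration of which bytes AL/AH/AX/EAX cover within a 64-bit
     "register", modelled as a union on a little-endian machine. */
  #include <stdio.h>
  #include <stdint.h>

  union reg {
      uint64_t rax;
      uint32_t eax;                     /* low 32 bits      */
      uint16_t ax;                      /* low 16 bits      */
      struct { uint8_t al, ah; } b;     /* lowest two bytes */
  };

  int main(void) {
      union reg r = { .rax = 0x1122334455667788ULL };
      printf("al=%02x ah=%02x ax=%04x eax=%08x\n",
             (unsigned)r.b.al, (unsigned)r.b.ah,
             (unsigned)r.ax, (unsigned)r.eax);
      /* prints: al=88 ah=77 ax=7788 eax=55667788 */
      return 0;
  }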


Do you know for a fact that Clang's code is faster here, i.e., have you measured it on actual hardware? Armchair performance estimation about "the potential" for something is often wrong...


In a microbenchmark, the clang version appears to be a bit faster. However, note that code size can also become a relevant concern in larger code bases: Apple, for example, is said to generally build its OS code with -Os rather than -O2 or -O3 (and had -Oz added to gcc as an even more aggressive code-size optimization back when they still used it instead of clang).


It might not, but the GCC code has a potential issue in how it does things: https://stackoverflow.com/a/41574531 (curiously, the question is about GCC not doing it, so apparently it doesn't always).


It's a performance trade-off. It's not really surprising that GCC would make different performance trade-offs when optimizing for different microarchitectures (let alone, most likely, across different versions of GCC).

Why would you expect GCC to optimize for Pentium 4 (Netburst) in this day and age? (Especially given that the article is talking about Skylake.)


I don't expect it to optimize for P4. Are you talking about this? (It's one of the replies to that answer.)

> The quoted delay of 5 - 6 clocks is much better on later microarchitectures. For example from Sandy Bridge and Ivy Bridge, The Ivy Bridge inserts an extra μop only in the case where a high 8-bit register (AH, BH, CH, DH) has been modified

And you can see that even on later architectures avoiding AH makes sense (which is the opposite of what that GCC code does).


It's not obvious. If it were, then GCC would not do it. Please don't use the word 'obvious' when it is obvious to you but not to the majority of the population.


It is obvious, though I meant "obvious and correct" not "obviously correct"


Have you looked at the actual GCC codebase? It's very easy to say something is obvious when you're looking at a problem which someone else has nicely isolated; it's much harder to dive into a complex codebase which has a very wide support matrix and say it's worth the effort to change working code instead of so many other things.

More bluntly, before now wouldn't most people have said it was “obvious” that Intel would support their own documented features?


The code produced by Clang is a direct translation of the C code. That's the obvious part of it.

Most people here are not familiar with x86 assembly and its caveats, it seems. Reading the Intel and AMD optimization manuals might be a good start.

(And yes, the bug is not in GCC, it's in the Intel processor.)


> Most people here are not familiar with x86 assembly and its caveats, it seems. Reading the Intel and AMD optimization manuals might be a good start.

Please don't make unsupported assertions that everyone but you is speaking out of ignorance. It doesn't add anything to the conversation, especially when dealing with older codebases, unless you can prove that this is not and never has been the correct way to write that code. Otherwise it's just another way to say "CPU optimizations change over time and an open-source project doesn't have a team of experts tracking microbenchmarks to decide when to switch".


It should be easy to prove me wrong if it's as easy as you say; I don't see why you're so annoyed by it.

> “CPU optimizations change over time and an open-source project doesn't have a team of experts tracking microbenchmarks to decide when to switch”

Yes, that's what's happening, but the problem comes from the P6 architecture (though Netburst doesn't have those problems); this problem has been known for around 20 years.


I suppose the choice of word was to describe what code a human would intuitively/idiomatically write for this situation. Whereas compiler passes often add up to unintuitive, non-obvious code that is "correct" in that it's a valid lowering of the higher-level code, but not likely what a human would've written. This can sometimes be a pro or a con in terms of the resulting code's performance.


Who can clarify? Google says: Obvious - easily perceived or understood; clear, self-evident, or apparent.


The confusion is between "the assembly clang generates is the obviously correct assembly for this operation" (true) and "the obviously correct behaviour for a compiler is to generate the assembly that clang does" (disputable, as while the assembly GCC generates is less clear, there may be advantages to doing it that way in e.g. performance). "Clang does the obvious, correct thing." is ambiguous between these two meanings.


I disagree with "ambiguous"; you sneakily changed "obvious" to "obviously" in the second sentence.

  obvious, correct != obviously correct


Sounds like by "obvious" the parent means that Clang uses the most trivial approach.


> Clang does the obvious, correct thing.

Can we all stop shitting on GCC all the time? Thanks.



I'm very upset that we didn't get the story behind #10.


Can this be exploited for malicious code?


It's "unpredictable" what happens, so I think the best you're going to do is a DoS. I.e. if you could get the JS JIT in a browser to generate code like this and execute it repeatedly, you could crash a machine just by visiting a site.


It also is "unpredictable" what happens when you overrun a stack frame.

There is zero guarantee that the infamous Sufficiently Sophisticated Attacker couldn't predict it. Hardware is largely deterministic, even when it doesn't behave in the documented way. I wouldn't interpret this as literally unpredictable, it's just a generic slogan they always use in their errata. And they aren't going to say anything more for obvious reasons.

Patch this damn microcode.


In software systems you can nudge many similar situations to give you control over what badness happens when you drive the system off the rails of the invariants. No reason why things analogous to nop sleds, heap spraying etc. would not be applicable here.


> That would not be the first time that GCC treats undefined behaviors in the least possibly helpful way,

Oh compilers. Like VC++6.0 initializing uninitialized memory to 0xCDCDCDCD in DEBUG.


0xCD is the opcode of the INT instruction on x86. If you tried to execute out-of-bounds memory filled with it, you got a trap right away (handily, since every byte is 0xCD, execution landing anywhere in the fill still decodes as an INT).

http://www.mathemainzel.info/files/x86asmref.html#int

Before Intel processors had execute protection, this was a good way to catch bugs in your buggy C bugs. I mean programs.


It was a "good" way to catch a certain class of bugs on X86, while introducing a different class of (arguably) harder to catch bugs.

With VS .NET (VC7.0), the C++ compiler no longer defaulted uninitialized memory in debug builds.

Our software didn't just run on x86 (Windows/Linux), but also SGI MIPS (Irix), VAX (VMS), DEC Alpha (Tru64), and PowerPC (VxWorks), so across all those compilers we ended up having portable and robust code.


Err, that's on purpose? It's so you can tell your writes apart, which is helpful while debugging.

They used to use, uh, more obvious patterns but the PC brigade called them on it so they settled on 0xcd.


0xDEADBEEF how I miss you.

Yea, I knew it was on purpose, but it had the unintended consequence of masking bugs that would only show up in release builds.


Um, what was it? You can paste in decimal to avoid triggering the PC brigade ;)


Wikipedia has a list that includes some used values: https://en.wikipedia.org/wiki/Hexspeak


Amazing report


I routinely see these bugs when new hardware is still being developed.


They're completely unavoidable short of having all of the world's computing power with which to do formal verification (and even then, there's no guarantee).

I've even seen, when developing standard cell libraries for a new fabrication process, bugs that occur because of unforeseen interactions between different semiconductor doping concentrations that occur when (due to pure statistics in fabrication) they overlap in the wrong way.


Why is it so exciting when Intel has a bug? I had fun reading :)


I love reading about hardware bugs, and about people's perseverance in tracking them down!

Reminded me of a developer for Crash Bandicoot who had seemingly random crashes: http://www.gamasutra.com/blogs/DaveBaggett/20131031/203788/M...


Mr pthread!


Physicists say that if you sum Fabrice Bellard and Xavier Leroy, the universe enters an undefined-behavior void.


I will be surely downvoted for this, but I would like to remind everyone how this bug is just one of the many consequences of Microsoft's evil policy of encouraging the sale and distribution of proprietary software in executable form.

There is no other reason why a 64-bit multi-core CPU developed in 2015, which makes heavy use of pipelining and other advanced and complicated code execution strategies, would need to support instructions that address the second-lowest byte of a register (e.g. %ah) while keeping the rest of the register 'unchanged', which of course means making a complete mess of the code execution path.

The only reason this crap still exists is to keep Windows users' ability to run random EXE and DLL files from the 90s, if not random COM files from the 80s, at the expense of CPU cost, stability, and correctness for everyone else (such as the OCaml developers and users who ran into this bug.)


Unfortunately, you simply aren't correct - the blame here can't be placed with Microsoft. The bug occurs in 64-bit code, which is using the AMD64 instruction set. This instruction set is incompatible with the older 32-bit i386 instruction set, but shares many of its features.

During the move to AMD64, AMD decided to remove many features of the old architecture (for example, removing many one-byte instructions that were taking up valuable opcode space). However, they decided not to touch the AH/BH/CH/DH register access modes - yet it was entirely in their power to do so, because they were developing a new architecture with no existing code to stay compatible with. So, if you're going to blame someone for this, blame AMD for wanting to keep too many legacy features in what was designed to be a brand-new architecture.

For examples of 64-bit architectures that aren't simply extensions of the 32-bit versions, look at AArch64 (compared with the classic ARM instruction set), or Itanium (Intel's ultimately unsuccessful attempt to build a brand-new 64-bit architecture cleanly broken from the legacy 32-bit set).


You are right. I stand corrected.


I routinely peddle in binaries for open source, and even my own projects. Building from source is a pain in the ass, usually nondeterministic, and a constant maintenance burden with evolving compilers and build toolchains - not exactly the beacon of stability. EDIT: And dear me, I forgot to mention slow.

If you think stability and correctness are bad now, try compiling random codebases from the 90s with a motley melange of random people using random compiler versions - some prerelease, some stable, some unpatched and broken - many of which have been just as if not more aggressive than CPU vendors with regards to things like "undefined behavior" in C and C++ codebases.

Yes, microcode bugs suck. No, getting rid of x86 won't eliminate them. No, getting rid of QAed, tested, and sometimes disassembly-verified binaries (for such things as verifying fixed duration crypto operations) in favor of compiling with any and every C compiler under the sun isn't going to improve stability and correctness. No, I really don't want to go through the bother of installing configuring and fixing your build chain either.


Did you miss the part about where the bug was found using the current versions of GCC to build the current versions of OCaml?

It's lazy to the point of dishonesty to act as if Microsoft is the only one with decades of accumulated code.


There is still a point that proprietary binaries are probably the biggest force keeping decades of cruft in processors. (Not judging here)


Those "decades of cruft" are usually emulated in microcode and not actual silicon. Intel knows very few people use the BCD instructions, but it costs them almost nothing to keep them in while running them slightly slower than most operations.

Why mess up a stable API when you don't have to?


Precisely. I would go as far as saying proprietary binaries for Microsoft systems are the single force making Intel processors keep decades of cruft, considering the immense cost that Intel must bear to keep those old instructions (barely) running on modern processors.


That's not what happened. You can reproduce what gcc does easily by trying to compile the following code:

  int f(int x) { return x | 256; }
Gcc generates the following code:

  movl    %edi, %eax
  orb     $1, %ah
  ret
In contrast, clang uses an orl instead of orb. The advantage of using orb is that the immediate operand is a single byte, so the code is smaller. This is on an x86_64 target and does not involve 32-bit code.
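
For comparison, clang's output for the same function looks roughly like this (instruction order and exact register choice may differ between versions):

  movl    %edi, %eax
  orl     $256, %eax
  ret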


My point is that binary executables for Microsoft OSes, and the entire market built around them, are the only reason something like %ah (which is an incredibly stupid and expensive thing to address and the actual source of the bug) is part of the x86_64 architecture in the first place.


That does not make sense. First, Windows in 64-bit mode does not even support running 16-bit applications. Second, there's plenty of 16-bit code that's completely unrelated to MS Office/Windows (such as MBRs; that's because the BIOS boot process requires real mode).


That doesn't at all follow. Microsoft didn't invent the AH register.


That isn't true. You can see that by this bug having been found on a system not running Windows.


Shipping software in executable form? That ship has sailed, long, long ago. Shipping source is both of little use to the vast majority of non-huge customers, and a way of guaranteeing that your software will be pirated. In fact, the demands of getting paid are moving us more and more to SaaS.


Also, it's just trading one class of problems for another. Rather than supporting a compiled binary you need to support the entire build environment for every customer; having worked at a vendor who did that in the 90s I'm deeply skeptical that it would be an improvement.


Shipping source has many practical issues for software deployment, including security implications, which is why many are moving away from using things similar to "pip" and towards things like "docker" (among other reasons).



