Problems with the Golang runtime and toolchain (dtrace.org)
176 points by bcantrill on Dec 30, 2014 | 73 comments



At this point, I suppose I am as responsible for the code and decisions the author is complaining about as anyone.

Different people come from different backgrounds and, based on that, find different tools useful. Ultimately you need to use the tools that let you get your job done most effectively, however that is defined in your particular case.

The Plan 9 toolchain has worked and continues to work well for Go. Most importantly, it has been something we understand completely and is quite easy to adapt as needed. The standard ABIs and toolchains are for C, and the decisions made there may be completely inappropriate for languages with actual runtimes.

For example, no standard ABIs and toolchains supported segmented stacks; we had to build that, so it was going to be incompatible from day one. If step one had been "learn the GCC or LLVM toolchains well enough to add segmented stacks", I'm not sure we'd have gotten to step two. (Later, for gccgo, Ian did have to add segmented stack support to the GNU gold linker, but Ian also wrote that linker, so he was uniquely prepared.)

As another example, Go 1.4 replaces segmented stacks with stacks that grow or shrink, as needed, by moving. And Go 1.4's garbage collector (GC) is also completely precise wrt stack references into the heap. These require precise information about which words on the stack are pointers and, for the GC, which words are initialized and live. Go 1.5 will use that same information to introduce a concurrent (bounded pause) garbage collector. Getting the information through to the runtime was non-trivial but possible for us to do with the Plan 9 toolchain. If we'd built on GCC, it would have required first threading all that stack map information through the GCC or LLVM backends and into a form that the runtime can use efficiently. Like before, I doubt we'd have gotten to step two. (I don't know how gccgo is going to tackle this. It's a big undertaking. I expect gccgo will have segmented stacks and imprecise stack scans for a while yet.)
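
(For readers who haven't seen how this works in practice, here is a much-simplified sketch, with made-up names, of the kind of check the compiler inserts in every function prologue; the real runtime types and logic differ, this only illustrates the idea.)

  package main

  import "fmt"

  // goroutine is a hypothetical, simplified model of a goroutine's stack
  // bookkeeping; the real runtime structures and field names differ.
  type goroutine struct {
    stackLo uintptr // lowest address of the current stack
    sp      uintptr // current stack pointer (stacks grow downward)
  }

  // needsMoreStack models the check emitted in each function prologue: if
  // the new frame would cross the guard zone, call into the runtime. With
  // segmented stacks that meant chaining on a new segment; since Go 1.4
  // the runtime instead copies the whole stack to a larger one, using the
  // compiler's stack maps to find and adjust the pointers into it.
  func needsMoreStack(g *goroutine, frameSize uintptr) bool {
    const guard = 512 // hypothetical guard-zone size
    return g.sp-frameSize < g.stackLo+guard
  }

  func main() {
    g := &goroutine{stackLo: 0x1000, sp: 0x1200}
    fmt.Println(needsMoreStack(g, 0x400)) // true: time to grow the stack
  }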

The toolchain is quirky and flawed in many ways. Some people don't like typing shift so much in assembly files. Some people don't like the center dot hack (glad the author didn't find the Unicode slash!). Those are really cosmetic details, and if you stop there, you miss the point, which is that it's a small toolchain that we can keep in our heads and make arbitrary changes to, quickly and easily. Honestly, if we'd built on GCC or LLVM, we'd be moving so slowly I'd probably have left the project years ago.
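
(To make the cosmetic complaints concrete, here is roughly what a trivial function looks like in that assembly dialect on amd64; the center dot, the SB/FP pseudo-registers, and the all-caps mnemonics are what people react to. Treat it as an illustrative sketch, not exemplary style.)

  // func add(a, b int64) int64
  // Declared in a .go file; the body lives in a .s file like this.
  TEXT ·add(SB), $0-24
    MOVQ a+0(FP), AX     // load the first argument from the caller's frame
    MOVQ b+8(FP), BX     // load the second argument
    ADDQ BX, AX
    MOVQ AX, ret+16(FP)  // store the result where the caller expects it
    RET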

The linker in particular is a piece of code in transition. The author is 100% right that it's not great code. But it does work well, and it is being put to production use at many companies as you read this. The code to produce ELF object files and dynamically linked binaries in particular is new, is my fault, and was quite difficult to fit into the original structure. The linker is overdue to be replaced. The new linker is in the Go tree in src/cmd/link, but it's unfinished.

Using a standard linker instead doesn't work well for a managed language. The Go runtime needs to know where all the pointers are in the data and bss segments, and a standard linker won't put that information together for you. The Go linker does that. The Go linker also writes out runtime information for unwinding stacks during GC and translating program counters to file:line for use in stack dumps. Again, this is all much easier to do with a toolchain that we control entirely than having to build into existing ones. (I am not aware of any way to get file:line information through an existing linker except in DWARF tables, which is overkill. And then there's OS X and Windows, where you _really_ have no control over the toolchain.)
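
(For a small user-visible illustration of those tables: the runtime package can map a program counter back to a function name and file:line at run time, using the metadata the Go linker writes into the binary rather than DWARF.)

  package main

  import (
    "fmt"
    "runtime"
  )

  func whereAmI() {
    // runtime.Caller and runtime.FuncForPC read the pc-to-file:line
    // tables the Go linker emits into every binary.
    pc, file, line, ok := runtime.Caller(0)
    if !ok {
      return
    }
    fmt.Printf("%s at %s:%d\n", runtime.FuncForPC(pc).Name(), file, line)
  }

  func main() {
    whereAmI()
  }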

The calls to throw everything away and start over are at best hyperbolic. Certainly there is room for improvement, but the custom toolchain is one of the key reasons we've accomplished so much in so little time. I think the author doesn't fully appreciate all the reasons that C tools don't work well for languages that do so much more than C. Most other managed languages are doing JIT compilation and don't go through standard linkers either.


Thank you for posting that. It was never a mystery why the project started with the Plan 9 C compiler (if you're Ken Thompson, of course you'll use Ken Thompson's compiler) but the detail on the evolution of the thing is interesting.

It is a bit embarrassing that our self-moderation here did not cause your comment to rise a little higher toward the top.


This is an excellent reply, but it makes one point crystal clear: Go doesn't produce "native" code other than in the very limited sense of binaries that can be loaded directly by the OS. Go is a lot more like Java than C or Rust. It is Java with some different decisions than HotSpot: all code is AOT-compiled, and the result is statically linked with the "VM". In fact, because the JVM spec says nothing about JITs or static linking, a JVM implemented just like Go is a perfectly compliant JVM, and there are some JVMs ([1], [2]) that behave exactly the same way (they're used for hard realtime embedded software in avionics and similar domains).

[1]: https://www.aicas.com/cms/

[2]: http://www.atego.com/products/atego-perc/


Thank you for being an excellent example


He could have made the couple of points worth making without all of the douchebaggery. Also, I can't tell if he's just trolling with all the stuff about Unix vs. Plan 9 and calling Rob Pike out as an amateur, or if he's just woefully unaware of the history of both Unix and Plan 9. In either case, the first paragraph is illuminating as to whether any of this matters to you:

"In the process of working on getting cgo to work on illumos, I learned that golang is trash. I should mention that I have no real opinion on the language itself; I’m not a languages person (C will do, thanks), and that’s not the part of the system I have experience with anyway. Rather, the runtime and toolchain are the parts on which I wish to comment."

For anyone not working in the internals of the golang compiler/runtime, as long as the language is good and the tools produce the proper output when given the proper input, who cares whether they offend someone who writes only in C? (And keep in mind I say this as someone who wrote code in C/C++ for almost 15 years, and still has a very soft spot in my heart for C... but not C++, fuck C++.)

There seems to be a separating line between people who love and hate Go (well, one of a few such lines) which involves the Go team's convention of making stuff that should be scary for most programmers look and feel scary (see: unsafe package, some of the stuff in cgo, etc). IMO, they take the right approach (keep the easy stuff easy, make the hard stuff possible, and warn of landmines [both explicitly and implicitly] early rather than later) but some subset of people seem to get offended by these choices and lose sight of everything else Go offers.
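
(A tiny example of what "look and feel scary" means in practice: anything that subverts the type system has to go through the explicitly named unsafe package, so it stands out in review. A hedged sketch, endianness-dependent on purpose.)

  package main

  import (
    "fmt"
    "unsafe"
  )

  func main() {
    x := uint32(0x01020304)
    // Reinterpreting memory requires an explicit, ugly unsafe.Pointer
    // conversion; the result depends on the machine's byte order.
    first := *(*byte)(unsafe.Pointer(&x))
    fmt.Printf("first byte of x in memory: %#x\n", first)
  }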


he's just rage quitting on his last day of work. see the next post on that site, where pike is called a "world-class jackass" and Plan 9 "a colossal failure".

referring to anything Plan 9 as "colossal" is a new one, i admit :)


Hey, as going-out rage-quit rants go -- I honestly enjoyed it... it had a spark. I don't agree with (most of) what he thinks, but at least he did it with some style. Hope he enjoys the potato farming life, or whatever the heck he does next.


> The real toolchains everyone else uses were not invented at Lucent nor Google

Plan 9 is not a product of Lucent. It is a product of Bell Labs. Lucent killed Bell Labs. Calling it a Lucent product is about the worst insult you could come up with, short of saying it was GNU-like.

Plan9 C is a product of Thompson, Pike & Ritchie, you can't get more C than that, whatever your standards committee thinks.

> Without the benefit of decades of lessons learned running Unix in production

I'm not sure what the author of the rant thinks the people who invented Unix were doing all those years.


>I'm not sure what the author of the rant thinks the people who invented Unix were doing all those years.

I'm not sure what you think they're doing, but they were not running Unix in production.

I've read an interview with Ken Thompson from around the nineties, and he mentioned they were using Windows desktop machines. And true to his word, there was Windows on the monitor on his desk.

And of course they were doing academic and researchy stuff. Not running Unix in production.


What horse pucky. The first Unix systems were used by Bell Labs' patent department to process patent applications. It was this first customer of the Unix group that sponsored the development of Unix for many of its formative years.


>The first Unix systems were used by Bell Labs' patent department to process patent applications. It was this first customer of the Unix group that sponsored the development of Unix for many of its formative years.

That's the seventies. Or at best early eighties.

We're talking about the last 30 years.

Not to mention that whatever customers Bell Labs had, it doesn't follow that Ritchie, Pike and co. were running Unix in production or cared about those systems and those customers themselves...

They had moved on to other researchy stuff...


POP was written in 1999

The last Unix release with people from the Lab was UnixWare 7.1 (1999); the last from AT&T was in 2001.

Anything else is not Unix.

We know Rob wrote Utah 2000 in that year.

That's 15 years ago.


That's vague enough that it might very well have happened before desktop plan9 was practical for everyday use.


The most famous recent photo of Dennis Ritchie has him sitting in front of a windows desktop, drawterm-ed almost full screen into a Plan 9 CPU server. (drawterm is the name of the program that allowed you to connect local resources to a CPU server and essentially present a window on your current desktop as a Plan 9 terminal)

http://www.thetimes.co.uk/tto/multimedia/archive/00221/97222...


> Plan9 C is a product of Thompson, Pike & Ritchie, you can't get more C than that, whatever your standards committee thinks.

Really?

That looks like an obvious contradiction. Call me a language descriptivist. If everyone else means something else by C, then the thing in question isn't C.


"I used to be with it, but then they changed what it was. Now what I'm with isn't it, and what's it seems weird and scary to me..." -- Grandpa Simpson


hey f2f. This is maht.

It's Go Backlash-Day on HN. The pedestal they've been putting it on must have got high enough.


heya, maht! nothing you read below will be news to you, but for the record here's what i think:

as far as the technical arguments against the Plan 9-descendent tools are concerned, yes, they're outdated. but the lack of cruft accumulating over the last 20 years in that toolchain is also the reason why they went with it as bootstrap for Go -- it was simple enough then. this is changing, that's why most of the Go runtime isn't written in C anymore.

the author made the argument that Go's toolchain is hard to port. this is contrary to previous experience with Plan 9's toolchain. i'm hoping 4ad (who did the port to solaris/illumos) can weigh in with his experience, but from what i saw in the commits the process did not take altogether long or appear arduous. cgo is a separate matter, as we clearly see :)

as far as the author's personal dislike for all things pike that's a different issue altogether. and quite amusing :)


> i'm hoping 4ad (who did the port to solaris/illumos) can weigh in with his experience

What can I say :). I love the Plan 9 assembly. It's the same on every operating system (even though the calling conventions are different on different systems!). It has some higher level constructs that make the assembly more consistent between different hardware architectures. It's verifiable, to some degree, by go vet.

It's not great, there's nothing great to it; in many respects it's old fashioned and anachronistic (static register, really? In Go?), but it works just fine. Other assemblers work fine too. I never felt the need to complain about assemblers. They are such a tiny, trivial, implementation detail I am amazed to see them mentioned at all.

As for the Plan 9 C dialect, the Go toolchain recently removed the C compilers, so I don't see the point of discussing Plan 9 C at all, although I like Plan 9 C quite a lot too.

The linkers work the way they work because they originally supported only static binaries. I don't like the linkers at all, although I like what they do. Cross-compilation is a marvellous thing. The code sucks though, I hate it. But there are new linkers in the works. While I can rant away for days about the linkers (I hate the fucking code), I can't complain about the features. I love the features and how they work. I can just complain about the (inappropriate) code.

The Solaris port was definitely slowed down by the fact that on other Unix systems, Go encodes the system call table. I had to add ELF support for linking with shared libraries in order not to encode the system call table on Solaris. But I only had to do that once. Say, if I port Go to Haiku (God forbid), the code will now exist, and it will work. If Go had supported linking with shared libraries from the beginning, someone would still have written that code. ELF shared library support takes time to write, and it's irrelevant (from my perspective) whether I had to do it or someone else did. Someone has to write it at some point; the total engineering time spent is the same.

As for the portability of the toolchain, it's pretty portable, though porting to a new operating system is too minor a job for that to matter; time is dominated by other effects. But you can see it if you port it to a new architecture. I am doing the arm64 port, and the Go compiler was very easy to port, much easier than, say, gcc.


> The Solaris port was definitely slowed down by the fact that on other Unix systems, Go encodes the system call table.

Is there any reason that you can't just encode the system call table on Solaris too?


Yes: unlike on Linux, the system call table is not guaranteed to be stable. Also, libc does a lot of things, like setting up TLS. It's also critical in the signalling path.


If the system call table isn't stable, how can you ship statically linked binaries?


On Solaris you don't ship statically linked binaries. All Go binaries are linked with libc.so. In fact, amd64 Solaris doesn't even support statically linked binaries at all. It won't exec them.

The situation is exactly the same as on Windows. Windows Go binaries are dynamically linked against the Windows standard libraries, and Go binaries use the shared libraries instead of doing system calls themselves.


Ask Chuck Moore what he thinks about ANSI Forth.

I'll save you the trouble: "The ANSI Forth standard does not describe Forth, but a language with the same name."


I agree but I want to add that this is a phenomenon across many entities in human life, over a period of time.

Cities change with time, and people who lived in a city decades ago get a similar impression when they visit it now. Anyone who used a phone in the late 70s and sees one now will get a similar impression. Even jobs change: the title may remain the same, but the underlying technologies, tools, working hours, team composition, challenges, delivery expectations, etc. all change.

In conclusion, this is a natural phenomenon with respect to time: the name on top stays the same while the change happens below.


I agree with the author's main point that the golang toolchain has a bad case of NIH syndrome, and many of the specific problems he mentions have already been Solved™ in other tools for some time. But there are reasons to have full control of your entire toolchain, one of which is not having to deal with getting patches merged upstream (in any of the dependencies). Another is having the toolchain easily deployable on a multitude of platforms including windows and plan9. The solutions he mentions are heavily reliant on specifics of the unix environment, and most such projects would not welcome platform-neutralization. (If your windows toolchain deployment includes "install MinGW and use Cygwin", it's no longer "easily deployable" there.)


Pike said of the asm syntax a while back "It's a mess for which there is no good excuse aside from history": https://groups.google.com/forum/#!topic/golang-nuts/6-qcvLnW... .

Also, later in that thread:

> It's not a proper assembler, only a way to generate the startup object files necessary to get the binary running, and is neither clean, complete, nor documented.

> None of that is excusable, only true.

The Plan 9 C compilers, incidentally, are going away too, as soon as they port the rest of Go into Go.

It's clear to me there was path-dependency--some folks who mostly worked on Plan 9 started a little skunkworks-y language project, and rather than deciding to learn the LLVM world and slot themselves into that à la Rust or similar, they brought along the toolchain they'd been working with since forever.

To me, that's more like any other case of sticking with legacy code than NIH--it's "I'm using these tools I've worked a lot with because that's a lot cheaper for me than switching" not "I'm going to build tools from scratch right now because [I think I can do better|it's fun|rabid badgers made me do it]."

I wouldn't mind using LLVM or some such to handle the lower levels at all; I hear it's great elsewhere and I specifically hear Go's ARM codegen is pretty wonky. But it's also expensive to do and a lot less important to most of us than the things the team's working on, like lowering GC pause times.

Finally, in case anyone read it as serious, "Instructions, registers, and assembler directives are always in UPPER CASE to remind you that assembly programming is a fraught endeavor" is a joke. This is the same language that has 'notwithstanding' as an (ignored) keyword; not everything deadpan is serious.


>I agree with the author's main point that the golang toolchain has a bad case of NIH syndrome,

the blog seems to be from a Sun old-timer who pretty typically suffers from ... err ... enjoys that superiority complex of "we have already invented everything, why does the stupid world insist on going its stupid way instead of the great way we think it should go". This complex (and, as a major result of it, the refusal to accept reality) is one of the major reasons Sun went down. (hi! to Sun guys. Have you seen the abomination FB did to our nice offices in MPK!? :)


I hear they have a Sun sign there, to remind employees what happens to companies that stop trying. Or perhaps that's another office - or maybe just rumour.


It's on the back of the sign in front of the campus: http://www.businessinsider.com/why-suns-logo-is-on-the-back-...


Reading the not-invented-here idea about Go alongside the recent "generation lost in the bazaar" post ( https://news.ycombinator.com/item?id=8812724 ) is somewhat interesting. From a user perspective, I see Go as mixing both of those design and culture aspects together (successfully, IMO).


This guy seems to have a vendetta against Pike & Co:

> Unsurprisingly, Plan9 is a colossal failure, and one look at the golang toolchain will satisfy even the casual observer as to why; it was clearly written by people who wasted the last 30 years. There are plenty of other problems with Mr. Pike and his language runtime, but I’ve covered those elsewhere.

From http://dtrace.org/blogs/wesolows/2014/12/29/fin/


What vendetta? Besides Plan 9, which was decent for what it attempted, those are more or less true statements. Golang doesn't have anything that shows it kept up with 30 years of advances in computing/PL design.


The author sums up his attitude in one of the first sentences: "I’m not a languages person (C will do, thanks)".

Go is not C, and it's not going to be like C. It's not really reasonable to expect it to have the same toolchain conventions as C. I'm not saying the choices the Go authors have made are necessarily good, but I don't dislike them merely because they're different from C.


>Go is not C, and it's not going to be like C.

And yet, it's not much above C -- and in some cases, it's less.


So much ad hominem: arguing about the motives behind designs and decisions. It makes for an odd contrast of technical opinions and unsupported speculation.


Ignoring the rhetoric for a second, are there examples of these design decisions being asked about and being dismissed or flat-out not answered?

Most of us have seen the back and forth over generics in Go, including Rob Pike's own blog? post about why Go doesn't have them. With that said, though, I feel like I've also seen a lot of internals-related discussions with good discourse from all sides.

Whatever the existing design is or isn't, it doesn't seem fair to go off on such a rampage unless there's some sort of documented example that they are actively trying to avoid simplifying things and really are seeking to fulfill all of the things the author wrote. So... is there any evidence of that?


This is only remotely related, but the golang maintainers have refused to include support for the (insecure, but used on some legacy systems) ECB encryption mode, because it's insecure: http://code.google.com/p/go/issues/detail?id=5597 .


Although they don't have an ECB func, you can trivially implement it from their AES library. I imagine the thinking goes something like: if you don't know enough to write a couple of lines to implement ECB, you probably shouldn't be using it. It really doesn't take a deep understanding of crypto to understand ECB.
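
(A minimal sketch of what those "couple of lines" can look like on top of the standard crypto/aes package; the function name is made up, and ECB should still be avoided for anything real.)

  package main

  import (
    "crypto/aes"
    "fmt"
  )

  // ecbEncrypt encrypts each 16-byte block independently with the same key,
  // which is exactly the property that makes ECB leak plaintext patterns.
  func ecbEncrypt(key, plaintext []byte) ([]byte, error) {
    block, err := aes.NewCipher(key)
    if err != nil {
      return nil, err
    }
    bs := block.BlockSize()
    if len(plaintext)%bs != 0 {
      return nil, fmt.Errorf("plaintext must be a multiple of %d bytes", bs)
    }
    out := make([]byte, len(plaintext))
    for i := 0; i < len(plaintext); i += bs {
      block.Encrypt(out[i:i+bs], plaintext[i:i+bs])
    }
    return out, nil
  }

  func main() {
    key := []byte("0123456789abcdef") // 16 bytes -> AES-128
    ct, err := ecbEncrypt(key, []byte("YELLOW SUBMARINEYELLOW SUBMARINE"))
    if err != nil {
      panic(err)
    }
    fmt.Printf("%x\n%x\n", ct[:16], ct[16:]) // identical blocks, identical ciphertext
  }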

ECB is the simplest AES mode: you just encrypt each block of your message using the same key. Here is an image of a penguin encrypted with AES-ECB: http://upload.wikimedia.org/wikipedia/commons/f/f0/Tux_ecb.j...

Do you see the problem?


Yeah, this. Didn't realize people actually saw this as an issue worthy of worrying about.

The Matasano Crypto Challenges I did way back when were in Go and the ones involving ECB were just done as for loops over the standard library AES Decrypt/Encrypt methods, something like this:

  block, _ := aes.NewCipher(key) // crypto/aes; error handling elided
  for i := 0; i < blockCount; i++ {
    start := i * blockLength
    end := start + blockLength
    block.Decrypt(decryptBuffer, input[start:end])
    output = append(output, decryptBuffer...)
  }


The fact that you can still see the penguin is the least of ECB's problems. If you can mount an adaptive chosen-plaintext attack, you can recover secret data appended to your input block by block, without ever needing the key.


The most direct answer, which the author only mentions in passing, is gccgo. The author gives vent to the angry voices in his head by attacking the primary Go implementation and Rob Pike personally (while ignoring the intrinsic merits of the language itself), but gccgo works great.


The headline "Golang is trash" is very misleading. The author has nothing to say about Go the language, but spends a lot of time on the toolchain - which is arguably not part of the language at all.

In any case, it's not surprising that much of the `C` toolchain was ditched. There are few things more frustrating than trying to compile and link complicated C programs across platforms (the only thing that comes to mind is C++ programs).


Wait til Keith discovers Go passes arguments on the stack, totally ignoring SPARC's register windows.


I think it's sometimes good to ignore what has come before and start fresh, if you have the resources to throw at that, as the Go team at Google apparently does. Platforms and toolchains accumulate cruft over time; I guess the Go team decided it was time to simplify by, for instance, using their own link editor. I think that in an ideal world, they'd forego dynamic linking entirely; Plan 9 doesn't have it, after all.

The things wesolows pointed out about the Go asm dialect do seem weird, though. And what's with the Unicode dot character?


The problem, honestly, is that they didn't start from scratch: they used a bunch of their Plan 9 tooling to jumpstart it. While I don't quite share Keith's invective, I do have a lot of questions that haven't really been answered about why it was done this way. In particular (and a point that Keith only makes in passing), the fact that the Go runtime directly encodes the system call table is a very peculiar design decision that makes the system much, much less portable. And lest anyone point to the many ports: it has been ported to other systems only with great frustration -- which is part of what you're seeing in Keith's rant here...


Expectations are really important. I tend towards higher level languages (Python, JavaScript, toyed with Haskell), but I can empathize with the author's frustration of using a tool that breaks his assumptions.

Software is complicated business, and making new things that work similar to old things is a reasonable (and often successful) way to deal with the complexity. However, if it isn't close enough to the original, it might cause more problems, as it did for the author when he was using this not quite real assembly language.

I really hope that some knowledgeable HNers will weigh in on this, it is fascinating to read about.


> Writing in normal (that is, adult) assembly language is not fraught at all.

The author identifies as someone who enjoys C programming (so I do not doubt their sincerity here), but for those of us using languages with memory safety, I think this may not apply. Particularly when I consider security, I cannot help but worry about any C/assembly I write.


In other news "This hammer is trash because it doesn't screw in nails properly!"

I'm not interested in compilers, I hate writing code in low-level languages, I cannot for the life of me understand assembly, and Go works extremely well for what I want to do. I've used PHP/C#/JavaScript my whole life and I'm more concerned with writing software than writing code. I don't want to nitpick the tiny details of running software (registers, CPU cycles, cache misses, etc.) unless there is an obvious bug or performance issue I need to fix. Those things should be abstracted away from me as a developer, and Go is a great compiled language that looks and feels like an interpreted language from a developer's point of view.



This post was briefly killed by user flags. It's not hard to see why: its deliberately outrageous linkbait title violated the HN guidelines. Please don't do that again.

The post itself is substantive [1], so we've unkilled it and attempted to give it a neutral title.

1. Edit: though dismayingly not free of the same drama-mongering. Hopefully the HN thread will stay substantive in response.


Fair enough -- and thank you for seeing that it is, indeed, quite technically meaty (I submitted it, but I didn't write it). Please note that I titled the submission with the title of the blog entry itself -- which feels like it should always be safe. While I see the line in the guidelines you're referring to ("please use the original title, unless it is misleading or linkbait"), it's hard to know where that line is. In this case, it feels like the title is certainly acidic, bitter, wrathful, etc. -- but I don't think it was intended to be linkbait.


I think this post is linkbait. The author's title is "Golang is trash," but the post doesn't discuss anything about Golang. Golang is a programming language, and the author admits that he has "no real opinion on the language itself" (his words). This blog post discusses some pretty esoteric details about a particular toolchain, which is not even the only production-ready toolchain available for Golang at the moment, as the author freely acknowledges. If there were a blog post titled "Lisp is trash" that proceeded to talk about obscure implementation details of a certain version of CMUCL, I think it would be flag-killed -- and this should be too.

I also feel like there are some factual errors. He chides the Go developers for "arrogantly ignoring" existing tools for linking code. But the gccgo integration was done by Ian Lance Taylor, who is on the Go team at Google. I believe Rob Pike is on record as saying that he wanted multiple implementations of Go to increase the robustness of the language.

I would appreciate a good critical look at the Golang runtime and toolchain. How does its performance stack up, how fast does it compile, etc. But this ain't it. This is a guy complaining that he doesn't like the naming of registers in machine-generated binary files. I will literally never need to care about any of this. I don't even need to know this to make use of assembly language in my Go programs, since I can use the C library integration for that.


I probably should have been clearer. By "substantive" I meant substantive enough not to be flag-killed. In most such cases we typically replace the title with a more neutral one and, if the thread goes flamewar, penalize it. It's rare for a controversial post about programming to get killed outright, unless it's obvious junk.


The title was certainly provocative, but I don't see any justification for calling it "outrageous linkbait".

Actually, I think the term "linkbait" is thrown around far too freely here and should probably be banned.

It is a substantive article though, even if one happens to disagree with the author's conclusions. I'm glad it was resurrected from flagkilled state.


Call it provocative if you prefer. Actually a more accurate term might be flamewar bait. Either way, it should be obvious that such a title is inappropriate for HN.


Out of curiosity, what was the previous title?


Golang is trash


> This post was briefly killed by user flags.

So what happened with those users that flagged a link without reading the article?


The previous title was inflammatory and by itself may deserve flagging, even though the content of the article actually has substance (in between the white-hot flames).


Doubt an article titled "PHP is trash" (with the same quality of content) would have been flag-killed.


echo `curl http://dtrace.org/blogs/wesolows/2014/12/29/golang-is-trash/ | html2text | wc -w` words of seething envy


Oh wow, didn't know PS crowd so strong on HN.


Ha, joke's on you: I've learned NOTHING!


how does go compile so quickly? http://stackoverflow.com/questions/2976630/why-does-go-compi...

because they fucking forgot incremental compilation http://en.wikipedia.org/wiki/Incremental_compiler


Wait, it's fast because they didn't include incremental compilation?


It's fast because of dependency analysis. They forgot about something called incremental compilation.

This is what they should have spent their 20% on: https://gcc.gnu.org/wiki/IncrementalCompiler


Actually they do have incremental compilation at the package level. Use go install and you'll get it. Use go build and you throw away intermediate build artifacts so you don't get it. This is all documented.

In practice compilation times are not a pain point for golang.


Even better, `go get ./...` installs all dependent packages.


So why is the Go compiler still faster than compilers that do have incremental compilation? The implication of your wording is confusing.

Also, Go does have incremental compilation at the package level.


their wording was confusing; they were being sarcastic, implying that the go developers made compilation of a project from scratch so fast because they weren't aware of incremental compilation (which is a false assertion, I assure you).


"Golang is trash" is the first line of this article, and yet somehow it makes HN front page.


Articles are supposed to be technically correct.

Is there any rule about also having to be tactful if they fulfill the first criterion?



