“This change deletes the C implementations of the Go compiler and assembler” (github.com/golang)
306 points by osw on Feb 23, 2015 | 126 comments



So what is the bootstrap process going to be? Other than already having a Go compiler, I mean. Or is it having a Go cross-compiler?

Maybe it matters less; you used to always assume bootstrapping from C, but that more or less died with C++-based compilers, although you can still do a multistage bootstrap from the last GCC release before the switch to C++.


It is explained in the design document: https://docs.google.com/document/d/1P3BLR31VA8cvLJLfMibSuTdw...

Basically, you start from the last C version, and every version is supposed to be able to compile the next one.


So if you want to avoid trusting trust, you need to audit not only a C compiler and the source code for the Go compiler you plan on using, but also every past Go compiler as well?


You could audit the Go source and then use the diverse double compiling technique[0] to verify that the binary you're using corresponds to that source code.

[0] http://www.dwheeler.com/trusting-trust/dissertation/html/whe...
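
In outline, diverse double-compiling checks that a suspect compiler binary regenerates itself from audited source the same way a second, independently written compiler does. Here is a minimal Go sketch of the bookkeeping (the compiler paths and command-line interface are hypothetical, and the byte comparison is only meaningful if compilation is deterministic):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "os"
        "os/exec"
    )

    // build runs a compiler binary over the audited compiler source.
    // The "-o out src" interface is a hypothetical stand-in.
    func build(compiler, src, out string) {
        cmd := exec.Command(compiler, "-o", out, src)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

    func hash(path string) [32]byte {
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        return sha256.Sum256(data)
    }

    func main() {
        const src = "compiler/main.go"       // the audited source
        build("./suspect-cc", src, "self")   // suspect binary rebuilds itself
        build("./diverse-cc", src, "stage1") // independent compiler builds the same source
        build("./stage1", src, "stage2")     // stage1 rebuilds the source again
        // If suspect-cc is clean, "self" and "stage2" are bit-identical.
        fmt.Println("match:", hash("self") == hash("stage2"))
    }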


You mean, audit a large number of independently-written Go compilers?


Nicely done. To be fair though, Go is just 6 years old and still evolving.


Exactly. Which is why I think it's important to keep the option of compiling its compiler from a language which has a large diversity of compilers.


I don't believe it has been tried yet, but it seems entirely possible to use gccgo as the bootstrap compiler.


You can't avoid trusting trust. "Ken was here" :)


You actually can. Proof here: http://www.dwheeler.com/trusting-trust/


I was alluding to Ken being the author of the old, as well as one of the authors of the new compiler. In jest :)


You can. You just have to bootstrap all the way up.


What if an intermediate version is found to contain a trusting-trust violation? Then every Go maintainer would have to rebuild every version sequentially, from that version up to the current one.


Ah ok, so there will be a pretty long chain from 1.2 eventually, but hopefully it will be part of the test suite...


You should always be able to build a Go 1.x compiler with just the 1.4 tool chain binaries. We have committed to sticking to the Go 1.4 language and libraries for the compiler tool chain.


Where is this fact documented? Are patches tested against 1.4 tool chain binaries?


> Where is this fact documented?

http://tip.golang.org/doc/install/source#go14

> Are patches tested against 1.4 tool chain binaries?

The builders build the tool chain with Go 1.4, so the build dashboard will show failures if a patch incompatible with 1.4 is submitted. (We have pre-commit trybots that do the same.)



Usually you don't keep the chain. You just keep a working compiler. You can also bootstrap from another implementation of the language, e.g., gccgo.

A common trick is to keep a highly portable interpreted version of the target language and then use this for bootstrapping, but often you attack new architectures by cross-compilation instead. It all depends.

Also, it is common for self-hosting languages to require themselves to build.


Either cross compile, or get the last C-based Go compiler, and use that to build the more recent Go releases.


> So what is the bootstrap process going to be? Other than already have a Go compiler I mean. Or is it have a Go cross compiler?

What's the bootstrap process for the C compiler part of the compiler?


There is no C code in the compilers anymore.


But if one has access to the Go compiler source, is it not a reasonable guess that one would also be able to get a Go compiler binary from the same trusted source? I just don't see this as a problem, since it is impossible to build a computer from the ground up without trusting a lot of software first.


Probably gccgo, which is included in gcc.


For the purposes of bootstrapping.


I still just don't understand why they insist on building their own toolchain. It just doesn't make sense to me.

When you set out to build a programming language, what is your objective? To create a sweet new optimizer? To create a sweet new assembler? A sweet new intermediate representation? AST? Of course not. You set out to change the way programmers tell computers what to do.

So why do they insist on duplicating: (1) An intermediate representation. (2) An optimizer. (3) An assembler. (4) A linker.

And they didn't innovate in any of those areas. All those problems were solved by LLVM (and, to a more-difficult-to-interact-with extent, GCC). So why solve them again?

It's like saying you want to build a new car to get from SF to LA and starting by building your own roads. Why would you not focus on what you bring to the table: A cool new [compiler] front-end language. Leave turning that into bits to someone who brings innovation to that space.

This is a genuine question.


> I still just don't understand why they insist on building their own toolchain. It just doesn't make sense to me.

To quote rsc from https://news.ycombinator.com/item?id=8817990:

"It's a small toolchain that we can keep in our heads and make arbitrary changes to, quickly and easily. Honestly, if we'd built on GCC or LLVM, we'd be moving so slowly I'd probably have left the project years ago."

"For example, no standard ABIs and toolchains supported segmented stacks; we had to build that, so it was going to be incompatible from day one. If step one had been "learn the GCC or LLVM toolchains well enough to add segmented stacks", I'm not sure we'd have gotten to step two."


Which is of course no answer at all.

Their own explanation for wasting hundreds of thousands of man-hours on a "quirky and flawed" separate compiler, linker, assembler, runtime, and tools is that they absolutely needed an implementation detail that is completely invisible to programs, and which they are now replacing because it wasn't a good idea in the first place (segmented stacks). And it's apparently worth writing out a 1000-word rationalization that doesn't even bother to mention why that implementation detail was necessary in the first place: to run better on 32-bit machines. In 2010.

Or they say that they had to reinvent the entire wheel, axle, cart, and horse so that five years later they could start working on a decent garbage collector. Never mind that five years later other people did the 'too hard and too slow' work on LLVM that a decent garbage collector needs. What foresight, that.

That's not sense, that's people rationalizing away wasting years of their time doing something foolish and unnecessary.


The replacement to segmented stacks is copying stacks, which as far as my knowledge of LLVM takes me, would be very difficult to add. You need a stack map of pointers to successfully move pointed-to objects on the stack from the old region to the new.
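
To make the stack-map point concrete, here's a tiny sketch (my own illustration, not code from the Go tree): the deep recursion forces the runtime to grow, i.e. copy, the goroutine's stack, and the pointer passed down has to be found and rewritten so it still refers to the moved variable:

    package main

    import "fmt"

    // grow recurses deeply, forcing the runtime to enlarge the goroutine
    // stack by copying it to a larger region. p points into an earlier
    // frame (assuming x below stays on the stack); the stack maps let the
    // runtime find and rewrite pointers like p during the copy.
    func grow(n int, p *int) int {
        if n == 0 {
            return *p // still valid after any number of stack copies
        }
        return grow(n-1, p)
    }

    func main() {
        done := make(chan int)
        go func() {
            x := 42
            done <- grow(100000, &x)
        }()
        fmt.Println(<-done) // prints 42
    }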

There is a great deal of work going on in LLVM on this issue for precise GC of other languages, and (from the outside) it looks like more hours have been spent on it than on the entire Go toolchain. As Go developers don't have the resources or expertise to make such wide-ranging changes to LLVM, it would have blocked Go development.

GCC is similar. Those working on gccgo are trying to work out how to add precise GC and copying stacks. It is much more complex than it was on the gc toolchain.

There is great value in having a simple toolchain that is completely understood by the developers working on it. In fact, that very idea, that code you depend on should be readable and widely understandable, is one of the goals of Go. Applying the goal to the toolchain is a case of eating our own ideological dogfood.


> The replacement to segmented stacks is copying stacks ...

Which again is not an answer. Why are segmented stacks necessary? Why are copying stacks necessary?

This reasoning, which is apparently their best, amounts to saying that they had to implement their own compiler, linker, assembler, and runtime because they decided they had to implement their own compiler, linker, assembler, and runtime.


> Which again is not an answer. Why are segmented stacks necessary? Why are copying stacks necessary?

Because Go wants to provide very lightweight goroutines for highly concurrent and scalable services.


Lightweight goroutines depend on small stacks. General API design in Go depends on lightweight goroutines.

In particular, Go style is to never write asynchronous APIs. Always write synchronous blocking code, and when you need to work concurrently, create a goroutine.

You cannot do this in C with pthread, because OS threads are too heavyweight. So you end up in callback-based APIs that are harder to use and harder to debug (no useful stack traces).

This small feature has a surprisingly wide-ranging effect on the use of the language. It is a very big deal.
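
A toy sketch of that style (invented for illustration): each call is plain blocking code with a real stack trace, and the concurrency comes from spawning goroutines rather than from callbacks.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // fetch is ordinary synchronous, blocking code: no callbacks, and a
    // panic inside it would produce a useful stack trace.
    func fetch(host string) string {
        time.Sleep(100 * time.Millisecond) // stand-in for blocking I/O
        return "response from " + host
    }

    func main() {
        hosts := []string{"a.example", "b.example", "c.example"}
        var wg sync.WaitGroup
        for _, h := range hosts {
            wg.Add(1)
            go func(h string) { // concurrency comes from goroutines...
                defer wg.Done()
                fmt.Println(fetch(h)) // ...while each call stays synchronous
            }(h)
        }
        wg.Wait()
    }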

Go is very much about reinventing these low-level things.


Threads and stacks are orthogonal. You can have coroutines with contiguous stacks, and threads with non-contiguous stacks.

Furthermore, it's extremely easy to use non-contiguous stacks in C just by knowing the amount of stack used by each function, which the compiler already knows.

This is a totally absurd reason to reimplement an entire toolchain.


I'm not sure what about my comment is worth downvoting, but to try one more time:

If you came to me tomorrow and said "I want to build a language just like C but with non-contiguous stacks" I agree, I would use LLVM or GCC. But that's not what happened.

The history is three engineers decided to see if they could do better than C++ for what they did every day. That meant trying lots of things. One of the many was goroutines, but they needed a flexible platform on which to try lots of ideas that didn't make the final cut.

It just so happens, two of them had worked on a toolchain before. Ken's from Plan 9. (Which long predates the existence of LLVM.) And as he knew his compiler well, it was very easy to modify it to try these experiments.

In the end the language stabilized with several unusual features, several of which would be difficult to add to other compiler toolchains they were not familiar with. Is that the point they should switch to using LLVM?

Building on a toolchain you know that lets you experiment makes a lot of sense. Knowing a toolchain means you get to work quickly.

The end result still has useful features that LLVM does not. For example, running ./all.bash does three complete builds and runs all the tests. It takes about 60 seconds on my desktop. Last time I tried LLVM, it took minutes. Go programmers love fast compilers.


> If you came to me tomorrow and said "I want to build a language just like C but with non-contiguous stacks" I agree, I would use LLVM or GCC. But that's not what happened.

Except that is exactly what happened. Russ says: "segmented stacks; we had to build that, so it was going to be incompatible from day one."

That's the rationalization though. It wasn't about features that you can all but do in plain ANSI C being 'too hard'. We all know what really happened is that they were comfortable with their Plan 9 toolchain and made a demo using it... which is fine. Then they continued to develop their demo for 5 years instead of throwing it out and doing it right, and now they are stuck having to make excuses for why their compiler and assembler and linker and runtime and tools are sub-par.


I don't think the tools are sub-par. My programs compile and run quickly on a variety of platforms. My toolchain builds quickly too.

And now it is written in Go, the preferred language of the compiler engineers.


Most of the toolchain already existed. When Ken Thompson started writing the Go compiler he based it on his Plan 9 C compiler implementation.


LLVM is a C++ monstrosity that takes hours to compile. Other programming language projects have to maintain a "temporary" fork of LLVM to achieve their goals: https://github.com/rust-lang/llvm/tree/master


Rust doesn't do this because of the length of compiles; it's because we occasionally patch LLVM and then submit the patches upstream.


What's your counter-proposal?

If you're building a new language, you need a new AST. You can't represent Go source code in a C++ AST.

There are alternate compilers for Go, in the form of gccgo and llgo. But those are both very slow to build (compared to the Go tree that takes ~30s to build the compiler, linker, assembler and standard library). And the "gc" Go compiler runs a lot faster than gccgo (though it doesn't produce code that's as good), and compilation speed is a big part of Go's value proposition.


> There are alternate compilers for Go, in the form of gccgo and llgo. But those are both very slow to build (compared to the Go tree that takes ~30s to build the compiler, linker, assembler and standard library).

For any non-Gophers reading this: I write Go as my primary language, and have for the past two and a half years. I just timed the respective compilation speeds on a handful of my larger projects using both gc and gccgo (and tested on a separate computer as well just for kicks).

gccgo was marginally slower, though not enough to be appreciable. In the case of two projects, gccgo was actually slightly faster. The Go compiler/linker/assembler/stdlib are probably larger and more complex than the most complex project on my local machine at the moment, but I think my projects are a reasonable barometer of what a typical Go programmer might expect to work with (as opposed to someone working on the Go language itself).

The more pressing issue as far as I'm concerned is that gccgo is on a different release schedule than gc (because it ships with the rest of the gcc collection). That's not to say it's not worth optimizing either compiler further when it comes to compilation speed, but it's important for people considering the two compilers to understand the sense of scale we're talking about: literally less than a second for most of my projects. The time it takes you to type 'go build' is probably more significant.


Thanks. That's good data. I haven't seen any measurements for a few years. It's good to see that gccgo has caught up. Which version of gc did you test?

Yes, the release schedule is another important reason for building our own toolchain. Being in control of one's destiny is often underrated.


On this machine, gc 1.4.1 vs. gcc 4.9.2 (with no extra flags). My other machine has gc's tip, but runs Wheezy so it's probably an older version of gcc... it wasn't much different either way. I would barely have noticed it if I hadn't been timing it.


I would never set out to build a language I wanted people to use and not build it as a front-end for LLVM. I don't want to write an optimizer or assembler.

I don't doubt for one second that llgo takes longer to compile. And in exchange for slower compile times you benefit from many PhDs' worth of optimizations in LLVM, and every single target architecture it supports.

It's easy to build something faster when it does less. I'll admit there's no blanket right answer to that tradeoff.


Yes, that's why there's both gc and gccgo (llgo came later). Apart from the rigour of having two independent compilers, they are seeking different tradeoffs. gc is very interested in running fast, and gccgo benefits from decades of work that have been put into gcc's various optimisations.

Does that answer your original statement that you didn't understand why we build our own toolchain?


Well, I still don't understand. Russ says it was for segmented stacks, but doesn't explain why those were necessary. You say it was for compile speed, yet gcc and llvm can crank out millions of lines of code a second at optimization levels similar to the Go compiler's. Neither of these is a convincing explanation.


Then you will be stuck with the C view of the world of what a linker is supposed to do.

Just look at Modula-2 and Object Pascal toolchains as examples of compile speeds and incremental compilation features that could run circles around contemporaneous C compilers.

Or consider the lack of a proper module system, which requires linker help.


I was impressed by the toolchain when I first peeked at Go, because it was dead simple to get up and running on any platform, especially Windows.

For gcc you have to deal with MinGW. Isn't LLVM just now getting to the point where it can build native Windows applications?

This is one area where I hope Rust makes progress. MinGW/Msys2 is just kind of gross stuff to deal with.


Care to explain what's gross about MSYS2/MinGW-w64? I'm genuinely interested in making it less gross.


Want to make it less gross? Make it completely go away.

Installation of Go or Python is just like any other Windows install. You download an installer.exe or .msi, run it, and you're done. Things compile or run immediately, and you don't have to start using a "special" terminal just for it to work.

My experience with MinGW is very different, especially for languages that depend on it. "Step 1: Install MinGW"? What does that even MEAN?:

"Ok, I ran this installer, and it brought up the MinGW Installation Manager. Is it done? What am I supposed to do here? Which one do I choose? How do you even select a package? What even ARE packages? OK, so I select something then go the Package menu and select Mark for Installation. It's not doing anything. Is it done now? Close window. Nope that didn't work. Open it back up. Oh, so after marking a package I have to go to the Installation menu and choose Apply Changes. ..."

This actually happened to someone I was trying to help over the phone. Heaven forbid they get lost in All Packages and get confused by the dozens of packages each with half a dozen versions and each with three different, non-descriptive "classes".

Installation needs to be braindead simple. During installation it should show a list of extra languages that can be installed, where you can't uncheck 'base' (with an "Advanced Options" button in the corner that opens up the standard installation manager instead). It should set up any environment variables, including PATH, (and including restarting explorer to refresh the env) and it shouldn't require the use of any terminal other than cmd.exe (despite it being terrible).

If you're installing something else entirely that depends on MinGW, their installer should be able to bundle the MinGW installer, and it should install without having to make any choices. It should detect if MinGW is already installed and install packages there instead, still completely automated.

Make it go away.


I was asking about "MSYS2/MinGW-w64".

You seem to be confusing mingw.org (http://www.mingw.org/wiki/getting_started) and MinGW-w64 (http://mingw-w64.sourceforge.net) and your rant seems entirely directed at mingw.org.

The software distribution I was asking the parent poster about is MSYS2 (http://msys2.github.io/ and http://sourceforge.net/projects/msys2/), do please come back with constructive criticism on that project if you are interested enough to investigate further.


You're correct, my complaints were towards mingw.org, not MSYS2. My apologies for not reading carefully enough. I may actually take a look, thanks for directing me.


There was a recent discussion on Reddit where some MSYS2 users chimed in, which may cover some useful ground:

http://www.reddit.com/r/cpp/comments/2v6vlg/decission_which_...


Yes, I'd be happy to, but I'm not sure these are "solvable" issues, because they seem to be architectural mismatches more than anything else.

Part of the issue with most software that uses MinGW is that it is written with POSIX-y operating systems in mind. That is, operating systems that can very efficiently fork processes and quickly deal with many small files. Unfortunately, Windows does neither well. Process creation is slower, and NTFS is a very lock-happy filesystem.

Why do I consider this gross as a user? Things like Git that utilize msys are slow on Windows. As in, I notice the UI hanging. Things like autoconf are terribly slow on Windows due to all of the small processes that are created to detect the environment. Antivirus tools will lock files that are created and generally slow things down due to the nature of lots of quick-running processes creating and deleting small files.

These are just realities of most software written for non-Windows platforms. So whenever I see a program that requires MinGW, I'm always very hesitant to use it. The user experience tends to be terrible. I can still remember an issue trying to compile subversion on Windows using gcc and having it take well over an hour. Turns out with all of the processes being forked and temp files being created, the antivirus program was adding a delay to every command. After completely disabling antivirus it compiled in 15 minutes.

So, in one sense, this isn't a problem with MinGW or msys, but it is typical of software that relies on them.

The other issue I have with them is that they don't integrate well with the native tools on Windows. For instance, Pageant is a good, graphical SSH agent on Windows. You have to mess around with environment variables and plink and junk to avoid ending up with multiple formats of SSH keys on your machine. Trying to deal with SSH through bash and msys is not a user-friendly experience. PuTTY is the gold standard of SSH clients on Windows.

Using msys/MinGW is like running X programs on OS X, Windows programs through Wine on Linux, or Java GUIs on any OS. It has enough strange warts and doesn't quite fit the feel of the rest of the OS.

That is where Go was awesome. I downloaded go and there were 3 exes on my machine. I ran "go.exe build source.go" and out popped an exe.


Thanks, sure, POSIX is a round hole and Windows a square peg, but I think that the excellent work done by Cygwin over the last few years has done a great deal to file down the edges of the Windows peg. MSYS2's hacks on top usually function ok.

With Git, I can clone very large projects almost as quickly with MSYS2 as I can on Arch Linux. We did begin to port msysGit (the native, non-MSYS executable, yeah, go figure) to MSYS2 and found very little speed improvement, so we stopped, since the MSYS2 version is much more functional and always up to date.

Using Autotools on MSYS2 isn't significantly slower than on GNU/Linux. You can try building any of the many Autotools-based projects we provide to see this for yourself. Besides, for software which relies on Autotools for its build system, there's no choice but to use it (outside of cross compilation).

That NTFS (and the Windows filesystem layer) isn't fast is independent of MSYS2 vs native Windows anyway.

An anti-virus will slow down all Windows tasks to an unusable crawl; just run your scan overnight and be careful about what you click on. MSYS2 isn't hit worse than, say, Visual Studio. Fundamentally, MSYS2 is a software distribution whose end product is native Windows applications aimed at the user. The POSIX stuff just helps us get there (this is why we don't provide X Windows; if you want that, use Cygwin), so for example using Qt Creator as supplied by MSYS2 should give an experience that's roughly the same as using Qt Creator supplied by the Qt Project (but much easier to maintain).

Apart from installing and updating packages, you can avoid the MSYS2 console and just run programs in C:\msys64\mingw64\bin.

The security advantages we bring via shared packages (e.g. libraries) are very worthwhile.

> Trying to deal with SSH through bash and msys is not a user friendly experience. PuTTY is the gold standard of SSH clients on Windows.

Since on MSYS2 things are shared, your SSH keys are shared between all applications that use them in ~/.ssh, as you'd expect. I use mklink /D to unify my Windows User folder and my MSYS2 HOME folder (be careful not to use our un-installer if you do this though; it follows the symlink :-(). We do have putty but I haven't checked that it doesn't use %AppData% or, worse, the Windows registry to store keys. If it does, that's a bug we'll fix. To install putty:

$ pacman -S mingw-w64-x86_64-putty


The Go team does want to innovate on the toolchain. A key factor in the design of Go is the belief that once a language is "good enough", developers are better served by a superior toolchain (and specifically faster compilation) than by a fancier language. They want to own the toolchain so they can optimize it for Go and make their own tradeoffs about speed versus features.


I read somewhere (but I can't think of the keywords to find it now) that they found the greater flexibility of owning their toolchain was worth the cost. For example, they changed their data layout for GC purposes and changed the segmented-stack approach over the course of development; had they been tied to LLVM or gcc, they'd have spent much of their time fighting against those implementations, or politicking to convince the maintainers to add additional complexity to their systems for an unproven language. (My example is weak because I am trying to retell their reasons and my recollection is vague.) I think they still haven't succeeded in bringing gcc up to par with their current approach.


LLVM supports precise GC now via the late safepoint placement infrastructure [1]. This infrastructure should be sufficient to support both copying stacks and a precise GC.

This is a recent addition and did not exist at the time Go was created, however.

[1]: http://llvm.org/docs/Statepoints.html


Are you thinking of this comment?

https://news.ycombinator.com/item?id=8817990


That's the one, thanks!


Wow, github doesn't handle big diffs well. Some sort of automatic pagination would really help.


GitHub does cut it off beyond a certain point, but that cutoff point should be much, much earlier.


Github handles it well, but our browsers don't. It would be nice if they loaded large diffs progressively, as you scroll.


Github is a website; it's its job to make the browser handle it well.


I would make the case that "big" diffs are a problem. Unless there's a really good reason (and bad dependency management is not a good reason), commits should be smaller and more logically related.


This is a merge commit, which means the diff is going to include all the changes on the branch being merged. Even if all the individual commits are small, a merge diff can still be very large.


While I agree that a big diff isn't a good idea, I disagree with the notion that one should develop in such a way that makes Github (or whatever VCS you're using) work right.

I don't think working within the capabilities of the VCS you're using should ever be a priority for a software development effort; rather, I think the VCS's priority should be to allow for its use within most contexts of software development (the other way around).


" This change deletes the C implementations of the Go compiler and assembler from the master branch." is logically related as it could be. Everybody has a different interpretation of it, I guess.


It's a merge commit, so naturally it's going to have a huge diff even if the actual work was done in much smaller increments.


Because he automatically translated all the code using a tool.


Congrats to the Go team, but that link kills the browser....


You can read the original commit on Gerrit, it's less explodey.

https://go-review.googlesource.com/#/c/5652/


Nice. That's a step forward. Another bit of legacy code bites the dust. Another step forward to the post-C world we need.

(If you want to compile with a different compiler as a check, there's an LLVM-based compiler for Go.)


Go is also supported in GCC, as GccGo.


RSC is awesome.


And the boy pulled up his bootstraps and became a man.


So, if I'm understanding this correctly, they are to re-write the Go compiler in Go, and compile it using the currently published compiler (i.e. 1.4)?

Could someone, kindly, explain how future versions would be built? Thanks!


Future versions will still be built with any current published compiler. There are binary releases for each major release, and it's not hard to avoid using new language features in the compiler, so building from source only requires the most recent binary release (at worst).


My understanding is that they wrote code that translated the C code for the original Go compiler into Go code. This translation wasn't fully general -- it made assumptions about how the C code was written -- but it allowed the port from C to Go to be very precise (i.e. bug for bug). So now that the Go compiler written in Go can compile Go, that's what they'll use going forward, and they will slowly work to make it into more idiomatic Go instead of machine-generated Go.

So to answer your question, this new Go-written-in-Go compiler will initially be compiled by the Go-written-in-C compiler. The output of that will be an executable Go-written-in-Go compiler, and _that_ will be used to compile itself in the future. I.e., the Go 1.4 compiler will be used to compile Go 1.5, which will be used to compile Go 1.6, and so on...

Keep in mind that this is not at all unusual. The C compiler GCC has been compiled using older versions of GCC for a long time. Having a compiler compile itself is a sort of milestone that many languages aspire to as a way of showing that the language is "ready."


It's generally called "self-hosting" when a compiler can compile itself[1]. It was a pretty big deal when Clang became self-hosting[2] in 2010.

[1] https://en.wikipedia.org/wiki/Self-hosting

[2] http://blog.llvm.org/2010/02/clang-successfully-self-hosts.h...


Thank you both for your inputs.


So this means the Go compiler is completely written in Go?


In source control, yes. There's not yet a stable release where that's the case though; Go 1.5 (due later this year) will be that release.


congrats gophers! That's a big step for the language.


Anyone else seeing this post as the 1st and 2nd link on the front page of HN?


Looking into it now. Edit: hopefully fixed now. Will edit this comment later when we figure out what happened.

Ok, we figured out what happened. A background process that is upgrading old stories to a new data format went rogue and made multiple copies of a few stories in memory. Apparently it agrees with some of you that HN could use more stories about Go.

Sorry for the error.


In case it gets fixed, here's what I see, to help diagnose the bug [1]. Both posts point to https://github.com/golang/go/commit/b986f3e3b54499e63903405c... and have the same HN item id=9097404, but different comment counts.

[1] http://i.imgur.com/xATOXPb.png


This must be the new and improved "eventually consistent HN" that dang has been talking about.


Yes. With the same URL for both discussion and article, though with differing scores and comment counts.


I notice that one submission is "canonical" for the flag/unflag bit. You can flag one of the two submissions, and the "unflag" will show up on the other submission.


Maybe Hacker News is budding. Is it spring already?


It's a huge page. Maybe HN took a while to read the URL and the OP ended up double-posting.


But they both link to the same comments section


And they seem to link to the same URL, which HN is supposed to prevent (for submissions created within a short period).


I'm seeing the same thing with this post as well: https://news.ycombinator.com/item?id=9096843


It's the smart quotes.


Ha. It's always the smart quotes.

They've caused me so many headaches over the years it's amazing to me that they are still actively supported and implemented in any software. What value do they even bring?

#DeathToSmartQuotes


Smart quotes tend to cause a lot of problems... especially since they don't just convert to a standard quote in Unicode, etc... [1]

[1] http://en.wikipedia.org/wiki/Quotation_mark#Curved_quotes_an...


User content posted from a word processor directly into a textarea field = much suffering.


> It's always the smart quotes. #DeathToSmartQuotes

This is pure kogir bait (he is your passionate co-positionist), but as far as we can tell, it wasn't the smart quotes.


The same is true for the "C# Edit and Continue and Make Object ID Improvements in CTP 6" story, presently at #25 and #27.


I'm also seeing duplicates of the 'Add "Magic" to Your Business' post on the front page.


Yes! And with different points, too.


Looks like they diverged. Lots of duplicate comments.

Edit: Nope. All comments show on both.


The comment links are in fact the same: https://news.ycombinator.com/item?id=9097404


Also, one is marked as having exactly double the number of comments and points as the other.


Not for me anymore: http://imgur.com/XM8xmEp


When the double point/reply relationship ended, the comments sections also diverged. This is how it looked while the comments were still identical: http://tinypic.com/r/vn2u0i/8


I've seen it before, a couple weeks ago, but it disappeared rather rapidly last time.


Yep, with different numbers of comments.


yes


The second post wasn't there for the first 30 minutes or so, it just appeared out of nowhere, no idea why.


Great news!


Once you go Go, you never Go back!


Then, clearly, the right path is to never go Go.


[deleted]


Version control ftw!


Git gives you access to all revisions for ever.


you'll still be able to see it if you check out an earlier version.


One does wonder if the register renaming from their abstract (but misleading) names to their proper machine names (e.g., from "SP" to "R13") wasn't at least in part a reaction to the (in)famous polemic on the golang build chain.[1]

[1] http://dtrace.org/blogs/wesolows/2014/12/29/golang-is-trash/


SP, FP, and PC are all still there. What we did was make the conventions more uniform across all architectures. The rules for certain corner cases for when SP and PC were references to the virtual register and when they were references to the real register were inconsistent. As part of having a single assembly parser, we made the rules consistent, which meant eliminating some forms that were accepted on only a subset of systems, or that had different meanings on different systems.
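
For readers who haven't seen the Go assembler, here is a minimal amd64 function (my own example, not from the tree) showing the virtual registers in use: FP names incoming arguments and SB names symbols, and neither has to correspond to a single machine register:

    #include "textflag.h"

    // func add(a, b int64) int64
    // FP is the assembler's virtual frame pointer, used to name the
    // arguments; SB is the virtual static base, used to name symbols.
    TEXT ·add(SB), NOSPLIT, $0-24
        MOVQ a+0(FP), AX
        MOVQ b+8(FP), BX
        ADDQ BX, AX
        MOVQ AX, ret+16(FP)
        RET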

I'm a little surprised you brought that post up to begin with. It completely misses the point, as I explained in my comment here at the time (https://news.ycombinator.com/item?id=8817990). When I wrote that response I also submitted a comment on the blog itself with a link to the HN comment. That blog comment has not yet been published. If you're going to keep sending around links to such an inflammatory blog post, could you also try to get my comment there approved?

Thanks.


No, this was unrelated.

SP, PC and FP are virtual registers, from the POV of the assembler. On _some_ architectures those words have real meanings, like RSP on Intel, but on others they are just conventions.

I don't think Keith's rage quit has had a measurable impact on the direction of Go or its toolchain.


Aside from what the other reply to your comment said, using R13 and R15 is actually a move away from standard notation: even though those do correspond to SP and PC, the ARM architecture manual as well as all assembly code I've seen uses the special names for those registers.


That's a classic of the "too rude and opinionated to salvage anything reasonable" genre right there.


Here we go again. Another compiler that can't be bootstrapped from source code. It's a packaging nightmare. Another magic binary to trust not to have a Thompson virus.


> Another compiler that can't be bootstrapped from source code.

It can be bootstrapped from source - it just needs to be bootstrapped either using gccgo[0], or using the 1.4 compiler (which is guaranteed to work for all 1.x compilers, not just 1.5)

> Another magic binary to trust not to have a Thompson virus.

"Reflections on Trusting Trust" gets posted on HN regularly, and it's an interesting exercise, but you are far more likely to have an exploit hiding in plain sight in a compiler compiled from source once than you are to have one that only appears after multiple iterated compilations.

It's a good concept for security experts and compiler developers to be aware of, but the likelihood is incredibly small.

Also, for what it's worth, "Trusting Trust" is over three decades old, and there have been numerous responses to it in the interim, with lots of study. It's like saying "Your problem reduces to 3-SAT, and satisfiability is NP-hard, so you can't solve it", throwing your hands up, and leaving it at that. In reality, solving 3-SAT in the general case is NP-hard, but it is well-studied enough that, in practice, solving SAT/3-SAT is actually pretty easy most of the time. Some of these responses have even been posted elsewhere in this thread, though they're also pretty easy to find online as well.

[0] which is written in C++; frankly, I'd be much more concerned about a single-compilation bug in any C++ code than I'd be about a multiple-compilation bug in Go.


http://www.dwheeler.com/trusting-trust/ (David A. Wheeler's page on Fully Countering Trusting Trust through Diverse Double-Compiling), for an example.

Though the diversity available for a Go compiler written in Go isn't exactly tremendous.



