I rewrote my blog in Go (ironzebra.com)
132 points by hermanschaaf on Aug 14, 2013 | 74 comments



"I am now loading less static assets. I removed the Disqus comments and the many many lines of CSS from the old site and replaced it with only a couple of lines of CSS alongside a CDN-hosted copy of Twitter Bootstrap. Finally, the Go site is deployed to a free instance of Heroku and the MongoDB hosted on a developer version of Mongolab, while the old Django site was hosted on a Webfaction shared server."

So...the Python/Django to Go/Revel comparison is basically worthless then? These are huge changes that completely invalidate the claim that the speed improvement is a result of using Go.

A lot of upvotes for an article with an obviously flawed conclusion.


And the very next sentence:

>"All these things influence the validity of a direct speed comparison between the two versions"

I hope the author wasn't trying to say, "Go is better always forever." I read it more as, "hey I rebuilt my site in Go, and it's fast." Go has potential - I'll be excited to see what people come up with.


I am left wondering:

Why not just measure/compare the response time of the single request that fetches the (presumably) HTML response, and leave all of the ancillary asset delivery and script execution as an aside? If this is about Go versus Python, all I care about is the portion where those are actually involved. And it would be easy to see in the network panel of the developer tools window in Chrome or Firefox.
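Or skip the browser entirely - a few lines of Go will time just that one request. Rough sketch, with the URL as a stand-in for whichever page you're comparing:

  package main

  import (
      "fmt"
      "io"
      "io/ioutil"
      "net/http"
      "time"
  )

  func main() {
      url := "http://www.ironzebra.com/" // stand-in URL
      start := time.Now()
      resp, err := http.Get(url)
      if err != nil {
          panic(err)
      }
      defer resp.Body.Close()
      // pull down the full HTML body so we time the whole response
      n, _ := io.Copy(ioutil.Discard, resp.Body)
      fmt.Printf("%d bytes of HTML in %v\n", n, time.Since(start))
  }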

Of course, the hosting environment also changed, so... there's that still.


Did you read the next sentence?

"All these things influence the validity of a direct speed comparison between the two versions, but the speed improvement is nevertheless too overwhelming to attribute only to these small changes. And in fact, many of these changes might even have negatively impacted the speed of the new version in exchange for saving on the monthly bills."


That assertion isn't backed up with data, though.

Without showing how many seconds were spent loading Disqus, we don't know how much of the load time improvement was based on eliminating Disqus vs. the switch to Go.

Based on experience I'm guessing the lion's share of the gain is due to eliminating Disqus.


Isn't Disqus loaded client-side? If so, the improvement isn't really down to Go so much as to no longer rendering comments on the client, and you'd see the same performance increase with PHP or Rails.


I recently benchmarked an uncached remote ASP site against a local Node.js copy that returned identical HTML. Of course, Node was around 200x as fast (1ms vs. 200ms).

However, in the browser, they felt like the same site. The 199ms head start resulted in only 50-75ms difference in the browser. Anyway, it turned our focus from backend work to CSS and image improvements.


I wish more web developers would come to this conclusion. It's so frustrating to see people trying to save 50ms by optimising slow SQL queries or fixing slow code paths. Meanwhile the user is waiting over 5 seconds to load a page with a bloated DOM, inefficient CSS, synchronous JavaScript, etc.

If your back-end is slow, just cache it and forget about it. Front-end is almost always the bottleneck (the exceptions to this rule being near-real-time dynamic sites like New Relic or GoSquared).
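To be concrete: even just sending long cache headers from the app gets you most of the way, since the browser and any CDN in front can then skip the backend entirely. A minimal Go sketch (the post handler here is a made-up stand-in for whatever slow rendering you have):

  package main

  import "net/http"

  // cached lets browsers and CDNs keep the response for an hour
  func cached(h http.HandlerFunc) http.HandlerFunc {
      return func(w http.ResponseWriter, r *http.Request) {
          w.Header().Set("Cache-Control", "public, max-age=3600")
          h(w, r)
      }
  }

  func main() {
      post := func(w http.ResponseWriter, r *http.Request) {
          w.Write([]byte("<html>...</html>")) // pretend this is the slow part
      }
      http.HandleFunc("/post/", cached(post))
      http.ListenAndServe(":8080", nil)
  }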


While rewriting things in a different language is fun (and fun is a great reason to do stuff), the speed of delivering what is ostensibly static content is a solved problem. This was completely unnecessary. Bake the blog post into static HTML and tune up Nginx to shove it down the wires as fast as possible. Stick it on a CDN somewhere if it's important. Then move on to a problem that doesn't already have an optimal solution, and share the solution if you're nice. That's how everything should be done.


Counterpoint: if you're learning a new language (or a new anything, really), it is much easier to reinvent than to invent. Just depends on what your goals are; whether you care more about the project you're working on or learning the thing you're learning.


It's his personal blog, so what do you care what he writes it in? Maybe if it was 10 years ago and getting slashdotted was still a concern you could point out a better way, but if a free Heroku instance can keep up with HN traffic, then perhaps "doing it right" no longer matters. It seems the OP wanted to learn something and see if it could be done, so I'd say he achieved his goals. Getting all butthurt that he didn't achieve yours instead just seems silly.


Even if you don't bake it into static HTML, this is a solved problem.

I serve my homepage/blog off a small Sinatra (Ruby) app. It's not slow, mostly because it permanently caches every page in memory (my written content grows far more slowly than the available memory on dirt cheap servers); I do it that way simply because it's easy and makes testing simple (it stats the files to check for modifications). But even without that, there's pretty much no excuse not to put a CDN, or a fast server like Nginx with caching turned on, in front of your app servers these days. That makes backend performance pretty much irrelevant unless a huge percentage of your requests need dynamically generated content.
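The same trick ports to pretty much anything. A rough Go sketch of the in-memory idea (the rendering is faked, and it skips the stat-based invalidation my app actually does):

  package main

  import (
      "net/http"
      "sync"
  )

  var (
      mu    sync.RWMutex
      cache = map[string][]byte{} // rendered pages, kept in memory for good
  )

  // cachedPage renders a path once and serves it from memory afterwards
  func cachedPage(path string, render func() []byte) []byte {
      mu.RLock()
      page, ok := cache[path]
      mu.RUnlock()
      if ok {
          return page
      }
      page = render()
      mu.Lock()
      cache[path] = page
      mu.Unlock()
      return page
  }

  func main() {
      http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
          w.Write(cachedPage(r.URL.Path, func() []byte {
              // stand-in for the real template/markdown rendering
              return []byte("<html>rendered once for " + r.URL.Path + "</html>")
          }))
      })
      http.ListenAndServe(":8080", nil)
  }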


I don't suppose you've put that repo anywhere we can have a look at? :)


"I removed the Disqus comments"

Wouldn't that by itself be enough to account for the download time differences?


Yes:

"I removed the Disqus comments and the many many lines of CSS from the old site and replaced it with only a couple of lines of CSS alongside a CDN-hosted copy of Twitter Bootstrap. Finally, the Go site is deployed to a free instance of Heroku and the MongoDB hosted on a developer version of Mongolab, while the old Django site was hosted on a Webfaction shared server. All these things influence the validity of a direct speed comparison between the two versions"

Indeed, all these things _invalidate_ the direct speed comparison between the two versions.


I don't see how he can say, "Well, I moved stuff to a CDN and changed hosts off of a shared server and greatly reduced my CSS file size, but surely it was the fact that I changed languages that caused my page load times to improve!"

No, author, Go had very little to do with your page load times. Both Go and Django are going to serve a simple blog at the same speeds.


Doesn't Disqus start replacing a specific div on a page-ready event?

If so, isn't Disqus loading 'postponed' until after the page loads? Or is it taken into account?


Disqus replaces the div on page load, then loads the comments asynchronously.

I have Disqus on my Jekyll/Octopress blog and still get millisecond loads.


I intended to type this ;-)

Still, Disqus basically has no/low impact.


Agreed. I moved my blog from Rails to static. I tested page load speeds with Disqus on/off and the differences were negligible.

Disqus loads almost entirely async.


Sure would be. I would expect that to have quite significantly more effect than any of the other changes, probably more than all the other changes put together.


It seems that people on HN have just fallen in love with Go. So many new articles lately. It's like the Node.js fever all over again.


Or Ruby on Rails, Arc, or whatever is the flavour of the year.

Additionally, it's funny to see the usual comparisons from young developers discovering execution and compilation speeds that were already possible on 16-bit systems.


I was surprised at how amazed people in one of the recent Go threads were with Go's compilation speed, given that gcc delivers similar per-line compilation speeds for C code on my old, slow home server.

As for 16-bit systems, I'd love to see a comparison with the Turbo Pascal compiler, for example, on modern hardware. Maybe my memory is deceiving me and the program sizes just weren't comparable, but it sure did seem like it was just flying on a 4.77MHz 8086-based PC. It'd be an interesting comparison.

Especially since I remember how frustrated I was with a lot of other contemporary compilers (whether for C, Pascal, dBase or others). The only other compiler I remember fondly for being fast was the AmigaE compiler (by Wouter van Oortmerssen, the strlen.com / Cube engine guy, who I see is now working at Google on Android gaming - nothing but good can possibly come of that).


Is this true?

I'm not sure about C, but it's definitely the case that Go compiles orders of magnitude faster than C++ for any reasonably sized project. For example, the ~200k lines of the Go standard library compile in about 14 seconds on my workstation, while random C++ libraries frequently take much longer (just my anecdotal impression from waiting on "brew install").


C++ compiles pretty slowly because templates live in the header files and need to be re-parsed in every compilation unit.

Also the language is quite complex and requires multiple analyses at parse time to decide what the developer is really trying to do.

C code can be compiled fast if not many optimizations are being made. For example, the Tiny C Compiler could compile the Linux kernel in around 15s back in 2004, though I'm not sure which modules were configured.

Any proper compiler for a language with modules should be able to beat C and C++ compilers hands down anyway.


The slowness of C++ compilation is more complicated than you imply. Walter Bright details the causes here: http://www.drdobbs.com/228701711


I know, but that's why Walter's explanation is an article and not just a plain simple post. There is too much compiler-related information to discuss.


That was also my experience with Turbo Pascal, having used all MS-DOS versions and the 1.5 Windows version.

Other languages with module systems also compile quite fast.

It would be interesting to see a table of compilation speed comparisons of compilers for languages with module systems, for applications of considerable size.


I remember E on the Amiga (http://strlen.com/amiga-e) from Wouter van Oortmerssen being blistering fast at compiling even on a 68020.


Hey, you forgot to mention Oberon! :-)


You get an upvote :)


I thought the flavor this week was Julia. Or Elixir. We are so fickle.


Neither has had many posts on HN. Nothing compared to Go or Node or RoR back in the day, anyway.


Someone should write a HN post subject tracker to keep track of it all. And keep forward-porting it to the flavor of the week, of course.


How did you achieve a 16 second response time for your blog? What the heck were you doing?


Yeah, it's not surprising that any change would make it faster. Plot twist: OP was previously typing the server response BY HAND.


Those cookies can be a real pain if you make a typo.


On my site I figured out that the abnormally high load times were caused by underpowered mobile devices accessing the site.


This struck me as really weird too. I was half expecting "and I removed FullHDLogo.BMP, but it's probably Go!" toward the end.

16 seconds is crazy slow, a sure sign that previously Something Wasn't Right.


That's a graph from Webmaster Tools. It may be that Googlebot is hitting some pathological case with the Django site (e.g. server-side sessions) or with Disqus. I don't think he ever tested the latency directly.


I built a small site in Go and I didn't really see the need to use a framework. Here's what I did for routes.

  func getRoutes() map[string]customHndlrFnc {
      r := make(map[string]customHndlrFnc)

      // routes
      r["/route_to_url"] = handler
      r["/route_to_url2"] = handler2

      return r
  }

  for key, value := range getRoutes() {
      http.HandleFunc(key, handlerWrapper(value))
  }
All of my routes for this sample were GET requests, but it could easily be extended.


Why do that versus:

  http.HandleFunc("/route_to_url", handlerWrapper(handler))
  http.HandleFunc("/route_to_url2", handlerWrapper(handler2))
Is the map used elsewhere?
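If handlerWrapper does cross-cutting stuff (logging, timing, panic recovery), I can see the map being handy as one place to register everything. Guessing at its shape - this is not the parent's actual code, and it assumes the usual net/http, log and time imports:

  // same shape as a stdlib handler; presumably it could carry more context
  type customHndlrFnc func(http.ResponseWriter, *http.Request)

  // a wrapper like this would add logging/timing around every handler
  func handlerWrapper(h customHndlrFnc) http.HandlerFunc {
      return func(w http.ResponseWriter, r *http.Request) {
          start := time.Now()
          h(w, r)
          log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
      }
  }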


After playing around with an Arduino in my spare time, I've realized how important it is to start thinking in multi-threaded concepts in programming. Nothing makes a better example than watching delay() physically prevent your sketch from taking the next step.

Golang (Still can't believe Google would release a language so piss poor for SEO) WILL be the language I pursue when I start down this path, but right now I don't have any projects that force me to start rebuilding my libraries from scratch.
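To make the contrast concrete, here's a toy Go sketch (just an illustration, nothing from the article): time.Sleep parks only its own goroutine, whereas Arduino's delay() stalls the whole sketch.

  package main

  import (
      "fmt"
      "time"
  )

  func blink(name string, every time.Duration) {
      for i := 0; i < 3; i++ {
          time.Sleep(every) // parks only this goroutine, unlike delay()
          fmt.Println(name, "tick", i)
      }
  }

  func main() {
      go blink("fast", 100*time.Millisecond) // runs concurrently
      blink("slow", 300*time.Millisecond)    // main keeps doing its own work
  }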


Save yourself the pain and go with Scala and the JVM ... mature platform with battle tested GCs, all the libraries and concurrency idioms you'll ever need and a modern FP language designed for scale.


Only if you don't mind paying roughly an order-of-magnitude penalty in RAM usage, which has long been an Achilles heel of JVM languages.

http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te...


In a server-side context the JVM tends to be the most memory-efficient of all the platforms that use GC. Here, for example, is the comparison of Ruby MRI with JRuby: it shows a massive difference in memory usage, and yet many Ruby developers choose to deploy their apps on top of JRuby precisely because of its much better memory usage in real deployments, which is why your reference to these benchmarks is basically useless:

http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te...


It uses exactly as much memory as you specify. It would be pretty stupid to have plenty of RAM and NOT use it. With enough memory, GC becomes essentially free, by not having to do any work.


Nonsense. I'm talking about the SAME data taking more memory to represent. Plus, all things being equal, a larger memory space takes more time to GC, and collections get longer, not shorter.

The JVM does things suboptimally in a number of ways. Per-object memory overhead is relatively high (typically 8 or 16 bytes per object), strings are UTF-16 (vs. Go's UTF-8), etc. It really adds up, and functional languages tend to churn through a lot of garbage.


This is the first time I've ever heard anybody complain about UTF-16, citing it as the reason the JVM consumes more memory, which is of course bullshit.

One reason JVM apps appear to consume so much memory is that the JVM allocates more memory than it needs. It does so because the garbage collectors are generational and compacting (well, CMS is non-compacting, but when memory gets too fragmented the VM falls back to a stop-the-world compaction). In a server context the JVM also doesn't release free memory back to the operating system, because allocating that memory in the first place is expensive, so it keeps it for later use ... the result is that memory allocation on the JVM is about as efficient as stack allocation, since all the JVM does is increment a pointer, and deallocation of short-lived objects is nearly as inexpensive as stack unwinding, since the GC is generational and deallocates things in bulk. These capabilities come at a cost, as the GC also needs memory to move things around, but it's memory well spent (e.g. Firefox has been plagued by memory fragmentation for years).

Speaking of Golang, it has some of the shittiest GCs in existence ... non-generational, non-compacting, non-concurrent and "mostly precise". Of course, it used to be fully conservative, which meant you could easily end up with really nasty memory leaks on 32-bit systems, because the GC couldn't tell the difference between ints and pointers.

The only thing saving Go is its ability to work with the stack in many cases, which alleviates the stress on the GC, but this low-level capability will also prevent it from having a good GC anytime soon, like the one the JVM has had for years.

And really, in terms of memory usage, you should see how the JVM's CMS or G1 behaves in production, as it makes all the difference. Our web service (written in Scala), which gets hit by more than 30,000 requests per second that must each be served in under 100ms, was at one point hosted on Heroku using only 8 instances, each with less than 400 MB of usable RAM. It was absolutely superb to see in action, as the processes themselves never exceeded 200 MB of real usage - Heroku even told us that we were costing them more than we were paying them.

So yeah, you can talk bullshit about strings being UTF-16, but the real world disagrees.


> strings as UTF-16 (vs Go's UTF-8), etc.

Strings aren't traced, so that has no impact on GC time.


It does, however, impact memory usage quite substantially if your data is string-heavy, as it is for most web apps.
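To put rough numbers on just the encoding difference (a toy Go sketch that ignores object headers and everything else):

  package main

  import (
      "fmt"
      "unicode/utf16"
  )

  func main() {
      s := "Most blog and web-app content is plain ASCII text like this."
      utf8Bytes := len(s)                            // Go strings are UTF-8
      utf16Bytes := 2 * len(utf16.Encode([]rune(s))) // two bytes per UTF-16 code unit
      fmt.Println(utf8Bytes, "bytes as UTF-8 vs", utf16Bytes, "bytes as UTF-16")
  }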


Wrong, wrong, wrong and wrong.


The problem is the Möbius strip of reasoning you go through when you start punishing yourself into picking the 'best' language. Different tools for this and that, sure... but I think ultimately Golang will see a more widespread adoption rate in the long run.

There will always be better tools; you just can't beat yourself up trying to pre-optimize for each one on every project.


Scala problems (in the old days):

- You can't find people to work on it
- Unbounded complexity, type-masturbation
- Slow compilation (you need a better computer/SSD)
- Might as well use Java; the tooling for Java is great

Do you know if those are still true?


It has gotten a lot better on all fronts.


>a modern FP language

Scala is many things, but that is not one of them. Scala is an OO language with some FP influence.


I'm not sure why you wouldn't cache anything. It doesn't matter whether x is faster than y; if both were cached the difference might be milliseconds, and in the real world that is what will happen.


OT, but please don't put solutions to Project Euler problems on GitHub; it's directly against the rules (http://projecteuler.net/about, section "I learned so much solving problem XXX so is it okay to publish my solution elsewhere?").

You can put the solutions in the dedicated thread on the site though if you want to share :)


First, just a note, you can only see that section if you're logged in.

Second, I disagree that it is 'directly against the rules' - nowhere does it specifically say that you must not post solutions elsewhere. I think the intent is there - but if someone simply cribs off another answer somewhere, they're not really learning anyway and are really robbing themselves. Project Euler is altruistic anyway and there's really no 'advantage' to stealing answers. (I guess someone could show off their progress, but I don't think that's much of an advantage, since they aren't really learning anything.)

I have learned quite a bit from looking at Project Euler answers from others and am glad they published their solutions. For example, there are many different ways to do #2 in Haskell and it is enlightening to see how they work.


OK, it may not be "directly against the rules", but it is definitely discouraged:

    > I learned so much solving problem XXX so is it okay to publish 
    > my solution elsewhere?

    It appears that you have answered your own question. 
    There is nothing quite like that "Aha!" moment when you finally
    beat a problem which you have been working on for some time.
    It is often through the best of intentions in wishing to share our 
    insights so that others can enjoy that moment too. Sadly, 
    however, that will not be the case for your readers. Real 
    learning is an active process and seeing how it is done is a 
    long way from experiencing that epiphany of discovery. 
    Please do not deny others what you have so richly valued yourself.


Let me guess: you only managed to solve the "publicly available" problems, right?


In the interests of full disclosure, I have only answered 3 problems on Project Euler, and I did them all myself, because I have the willpower to not look at other answers before creating my own. (I have the same username there: http://projecteuler.net/profile/jcurbo.png) Anyone looking to learn something and not just check boxes will do the work and exert the same willpower to ignore solutions already out there.

The core of this problem exists in education at all levels; the learner must be coerced or convinced that it is in their best interest to actually learn the content rather than cheat.


So um, Euler owns "Largest prime factor" now?

Seriously, the web is big, GitHub is big, Euler is pretty big. Overlap is unavoidable. The thing to do, I'd think, would be to ask for no direct linkage: "Go ahead, share your prime factor code, but for goodness' sake don't document it as a 'Euler problem.'"


For the record, this is the same graph after moving my Drupal-based blog from Media Temple to Linode. Nothing else changed. So yes, hosting can make that magnitude of a difference. http://i.imgur.com/8esmvJS.png


You forgot to provide an RSS feed of your blog posts.


~15 secs to load a blog post??? Sorry - that's not a problem w/ Django; something else goofy is going on here. Whether or not Go/Revel is ninjarockstar faster than Python/Django, I don't think this is the benchmark to prove it.



I think the comparison is a little misleading, as the author admitted that the newer version is slimmer and tighter. Django is heavy. It'd be interesting if the author could somehow have used Bottle (Python) and compared it to Revel (Go).


|and I decided to give it a go.

Never gets old.


Anytime you move rendering off the client (Disqus) you'll see performance increases. Everyone is focused on server-side speed, though most of the load time is network latency and DOM rendering.


Heroku, MongoDB, Go, Revel, Bootstrap, with a sprinkling of overkill..


You're absolutely right, this article sucks... No mention of a mythical horselike one-horned creature whatsoever. :)


How many Go articles do we need a day? Google must be desperate.

Tomorrow: How I rewrote my bedroom and kitchen using Go and saved lives.



