Thoughts on Go after writing 3 websites (kowalczyk.info)
153 points by kkowalczyk on Dec 31, 2012 | 56 comments



> If you know the shape of [JSON] data, you can serialize (marshal, in Go’s parlance) data structures to JSON or XML and de-serialize (unmarshal) from JSON or XML into a struct.

I wrote a short script to automate generating the struct definitions, since I didn't want to write them by hand. Go's reflection made this really easy.

https://github.com/chimeracoder/gojson
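
For the curious, the core idea is roughly this (my own minimal sketch, not gojson's actual code): decode into an interface{} and map the dynamic types encoding/json produces back to Go type names.

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    // goType guesses a Go type name for a value decoded by encoding/json.
    func goType(v interface{}) string {
        switch v.(type) {
        case bool:
            return "bool"
        case float64: // encoding/json decodes every JSON number as float64
            return "float64"
        case string:
            return "string"
        case []interface{}:
            return "[]interface{}"
        case map[string]interface{}:
            return "map[string]interface{}"
        default: // JSON null and anything unexpected
            return "interface{}"
        }
    }

    func main() {
        input := `{"name": "gopher", "age": 3, "tags": ["go", "json"]}`

        var m map[string]interface{}
        if err := json.Unmarshal([]byte(input), &m); err != nil {
            panic(err)
        }

        fmt.Println("type Generated struct {")
        for k, v := range m { // note: map iteration order is random
            // Export the field name and point it back at the JSON key.
            fmt.Printf("\t%s %s `json:%q`\n", strings.Title(k), goType(v), k)
        }
        fmt.Println("}")
    }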


Nice, I was about to write one tomorrow.


If you're interested, there's still some low-hanging fruit left, namely in two edge cases:

1. It assumes all slices are of type []interface{} by default, though oftentimes a more precise type is possible. (This is just a matter of looping through and determining the most specific element type; see the sketch below.)

2. It only accepts valid JSON. Not a bug, but for example, the sample responses on the foursquare API documentation omit the (required) quotation marks around identifiers: https://developer.foursquare.com/docs/venues/venues (click "Try it Out"). That's a common problem which I'd like to fix eventually.
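
For the first one, a hypothetical helper (not gojson's actual code) could look something like this:

    package main

    import (
        "fmt"
        "reflect"
    )

    // sliceType returns "[]T" when every element shares the same dynamic
    // type T, falling back to "[]interface{}" for mixed or null elements.
    func sliceType(s []interface{}) string {
        if len(s) == 0 {
            return "[]interface{}"
        }
        t := reflect.TypeOf(s[0])
        if t == nil { // first element was a JSON null
            return "[]interface{}"
        }
        for _, v := range s[1:] {
            if reflect.TypeOf(v) != t {
                return "[]interface{}"
            }
        }
        return "[]" + t.String()
    }

    func main() {
        fmt.Println(sliceType([]interface{}{1.0, 2.0, 3.0})) // []float64
        fmt.Println(sliceType([]interface{}{1.0, "two"}))    // []interface{}
    }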


PHP has json_decode() and json_encode() which do all of this correctly ... GO is interesting, but production ready? Not for a while.


Package encoding/json from the standard library can decode to objects ("Object" in the Java sense, "void *" in the C++ sense) like json_decode does.

It also can decode directly into typed structures. An example from the documentation (you'll have to open the linked example): http://golang.org/pkg/encoding/json/#example_Decoder
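
In case the link rots, the typed version looks roughly like this (a minimal sketch in the spirit of the documentation example, not a verbatim copy):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "strings"
    )

    // Message mirrors the shape of each object in the JSON stream.
    type Message struct {
        Name string
        Text string
    }

    func main() {
        const stream = `{"Name": "Ed", "Text": "Knock knock."} {"Name": "Sam", "Text": "Who's there?"}`

        dec := json.NewDecoder(strings.NewReader(stream))
        for {
            var m Message
            if err := dec.Decode(&m); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s: %s\n", m.Name, m.Text)
        }
    }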

Anecdotally, a typed version of JSON decoding is much nicer to work with.


"Anecdotally, a typed version of JSON decoding is much nicer to work with."

How so? Considering that JSON isn't strongly typed.


Indeed JSON is not strongly typed. However, in practice, every API and every data exchange format I've ever seen or used has used JSON in a strongly typed fashion, i.e. they say "the returned object will have a key of 'foo' with a number as the value, and a key of 'bar' with an array of strings as the value."

Why is this nicer?

Let's say you are working with the Reddit API. You can, of course, do it the way you suggested, and just decode the result into an untyped object. However, I find this incredibly brittle. You end up with strings indexing into arrays all over the place.

On the other hand, when you decode into a Go struct, you use the regular dot syntax to access parameters. You can't misspell these or use the value inappropriately (trying to loop over a JSON number, for example). You can write comments on the fields to document what you expect various values to hold. It is nicer in the same sense that using classes is nicer than just using arrays for everything.
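
To make that concrete with a made-up response shape (the field names here are hypothetical, not the actual Reddit API):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    const raw = `{"data": {"title": "Thoughts on Go"}}`

    func main() {
        // Untyped: every access is a map lookup plus a type assertion,
        // and a misspelled key fails only at runtime.
        var v interface{}
        json.Unmarshal([]byte(raw), &v)
        title := v.(map[string]interface{})["data"].(map[string]interface{})["title"].(string)
        fmt.Println(title)

        // Typed: a misspelled field is a compile error, and the struct
        // doubles as documentation of the response shape.
        type Post struct {
            Data struct {
                Title string `json:"title"`
            } `json:"data"`
        }
        var p Post
        json.Unmarshal([]byte(raw), &p)
        fmt.Println(p.Data.Title)
    }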

Another benefit is, of course, improved performance. Accessing a string value in a JSON object by key is the equivalent of a hash table lookup in Python or PHP. In Go (assuming you use strongly typed decoding), it is a field access at a constant offset from a pointer.

I'll say it again: Go's encoding/json can do what you want (the equivalent of json_decode). There are better alternatives available to users of that package, and the GP was discussing how to exploit one of those better alternatives without having to write the struct by hand (which I personally find a trivial one-time cost).


json_decode() without the second param (or with it set to false) will create an object for you that is typed; it only becomes loose if you mess with it afterwards. So what is the advantage other than speed? If I'm CPU bound, then yes, it's worth a look, just as many other alternatives are worth a look. But that is a whole different issue.


Speed is pretty important.

The compiler telling you about a bug caused by a typo in your code (as opposed to blowing up at runtime, sometimes in a way that you won't notice) is also important.

A struct definition is also documentation of your data. Another developer (or you a month from now) doesn't have to go back and re-read, say, the Reddit API docs. He (or you) can just re-read the definition of the struct and know all the data that a given JSON request provides. That's nice.

An editor can parse the struct definition and give you auto-complete, which is also nice.


Really? Another static vs dynamically typed language argument? Not this year.


> json_decode() without the second param (or with it set to false) will create an object for you that is typed; it only becomes loose if you mess with it afterwards.

First, let me point out that we have moved on to a slightly different topic. The tools that json_decode gives you are available in Go (if syntactically awkward, out of a necessity born of Go being statically typed). An example: http://play.golang.org/p/OvshdY1oVQ

However, I would write it as follows http://play.golang.org/p/pODFi9Jrve. This is merely a different way of doing the same thing.

Now, the topic we are talking about is the claim that the latter is nicer. I was not comparing PHP and Go when I said that (although I have some experience with PHP). I was drawing from my experience working with JSON in statically typed languages, in particular C++, Objective-C and Go. In all of these languages, the JSON libraries I used were essentially dynamically typed. I claimed merely that having the JSON library use reflection to deserialize into typed structures is nicer than using NSDictionary's various methods or maps in C++.

In PHP, the syntactic overhead of walking dynamic structures is obviously much lower than in the languages I was thinking about, so that criticism is blunted when referring to PHP. But I still believe there is some advantage there. A large part of what I find nicer about the latter is indeed the same reason I prefer static typing to dynamic typing. It is impossible to mistype the latter (try changing the ".Foo" to ".Fo" and running the snippet). Consider that doing the equivalent in PHP will simply return NULL (I think), as it is really just doing a dictionary lookup. I fully admit that this is a static versus dynamic thing, not about JSON in particular.

As to the performance thing, I think you too easily discount the importance of low latency. But regardless, I was really comparing to Go's statically typed brethren. They essentially use hashtables to represent parsed JSON objects, which is far from ideal. Note that they don't employ the clever tricks that dynamic languages use. V8, for example, makes JS's objects (which are essentially hashtables) perform significantly better than a mere hashtable.


All good points. It would be interesting to see the different JSON implementations in different languages benchmarked. As far as static vs. dynamic, both have their pros and cons.


I think it's nice that you can basically document the expected response, in the form of creating a type with a struct.

In pseudo-code-ish:

    type FacebookUser struct {
        ProfileID int
        Name      string
        // ...
    }

    var user FacebookUser
    json.Unmarshal(response, &user)
Compare it to the implementations that just give you a hash:

    json_decode($response);

    json.parse response
Yeah, you can document it with comments, or visit the API documentation, but you can fix the code and let your comments go stale. You fix both your documentation and your implementation in one go in the first example.

It's certainly why I find it more appealing, anyway. Makes me feel more secure in my code.


PHP is dynamically typed, Go isn't.


And this really has no bearing on a discussion of JSON, a data exchange format.


Which is odd, since I thought this discussion was about Go, not PHP.


And you confused yourself. This thread is about JSON support in GO.


FYI: Go is not an acronym.


FIY is an acronym. Your turn.


Fix It Yourself?

Anyway my point, in case you missed it, is that "GO" is not correct.


I've just worked my way through the Go Tour and have begun working on my first real project in it (a basic wiki, for learning purposes). At first, I was expecting C with some syntax changes, but the changes feel like they were almost entirely made for legitimate reasons. Type definitions after variable names really do make sense once you start using them. Even with my limited experience, I can say that it is really a joy to develop in.


Thanks for posting this, the code is very readable and good for learning. How much time did you spend on Go before you finished these projects?


I can't really quantify it as it was in drips, but in general, since I already knew C and Python, learning Go was easy. I could have started writing a web app right away, following tutorials on the Go website and elsewhere.

At the beginning it's frustratingly slow, as it always is when you learn something new, but if you know other C-like languages, picking up Go will be fast. You just have to put in the time to learn the standard library and the Go way of doing things.


I have some questions. Are you on Linux or Windows? Did you have to use a debugger or no need?


Go has full support for gdb if you want a debugger. You can even hook into goroutines.


I develop on Mac, deploy to Linux (Ubuntu). Didn't have to use a debugger so far.


And for those of you who are not comfortable running with net/http (it is a rather simple server), there is always uWSGI support for Go, which I found to be a rather brilliant idea.


(I wrote the article and the 3 web apps in Go.)

I would be much more uncomfortable running on uWSGI than on net/http. As far as I can tell (https://github.com/unbit/uwsgi/blob/master/plugins/go/src/uw...) Go's uWSGI implementation is just a small wrapper over C code inside a dll.

It's not portable (support for cgo on Windows is extremely sketchy).

It also goes against common sense. Any code you can write in C, you can write in Go, except it'll be easier (Go is garbage collected, has built-in arrays, strings, hash tables etc.) and there's a drastically smaller chance Go will have a security issue due to a memory overwrite.

It duplicates lots of net/http functionality.

net/http is well documented, well tested, reviewed by extremely knowledgeable people.

Most importantly, it is being used in production by me and many other people.

Finally, net/http gave me everything I needed to write my web applications. Before I would even consider using uWSGI (or anything else), someone would have to explain what exactly it provides that is so indispensable for writing web apps.


If net/http (or whatever http wrapper in whatever language) gives you enough batteries for a web application, I am really happy for you.

Sadly, I (and a lot of other people) need a lot more.

That is why the uWSGI project exists. Things like logging, clustering, statistics, multi-protocol support, alarms, timed events, request routing and so on are added to your app for free (and the funny thing is that you can tune a lot of internals without rebuilding your app).

Obviously, if the lack of Windows support is too high a price for you, the discussion ends here :)


You have basically just described the advantages of the D language too. Do you plan on trying D?


I did try D in its early days (i.e. D1), but at the time I was put off by the lack of documentation, the standard library split (Phobos vs. Tango) and the general immaturity of the system. I'm sure things have improved since then, but I really like Go's simplicity.


I've written some hundred thousand lines of production code in Go and D and feel they are very different languages. To sum it up: Go is a better C while D is a better C++.


Run nginx or lighttpd as a reverse proxy in front of net/http.


Running directly off net/http? Isn't that just Node.js? People are comfortable with that.


Uh... no. Ruby on Rails runs off a Ruby webserver. Flask has its own built-in webserver; web.py has its own built-in webserver. Though these are not languages but rather frameworks (and in the case of web.py, a library), they have built-in webservers.

My point? These things came way before node.js, so no, node.js isn't the only one with a built-in webserver.

The difference is that node.js has a rather performant built-in webserver (still, experience shows that a properly tuned gevent or netty beats a properly tuned node.js). The idea is that instead of spending time tuning these things, you can just wrap it in a uWSGI container and let better-tuned-out-of-the-box servers like nginx handle the webserving.



In the sense of your statement, people are uncomfortable with Node.js's compulsory concurrency for everything built off http, not with http itself.

Though you have to look at the package API yourself to decide whether it's running directly or not.


Is 7,776,000 pageviews/month the kind of traffic that you'd expect to be able to generate $80/month? That's $0.00001/pageview. If the average user viewed 1,000 pages a month, that would be $0.01/user/month.

Obviously it depends on the average pageviews per user, but what's your gut feeling? Was $80 too expensive?


Your math, while precise, is meaningless.

I probably could have gotten away with paying $0 if I didn't care about the best performance.

App Engine is distributed. You get one front-end instance for free. The more front-end instances you spin up, the lower the latency for your requests (consider a front-end instance a process: the more processes you have, the more requests you can handle in parallel, which lowers the latency of each request).

All the money was spent on additional front-end instances (3, I believe).

Was the price high? Yes, which is one reason I switched to a $60/month kimsufi server, which hosts the blog and still has 99% of its capacity free for other things.

It's possible I was over-paying. I'm not an App Engine expert. I picked the number of instances more on gut feeling than on comprehensive analysis. I could afford it, so it wasn't a big deal.


Ah, I thought GAE was spinning those instances up for you automatically. That's the default: it spins up extra instances as needed to keep latency down. With Go apps, that's generally down to I/O from API calls.


$80 is FAR too expensive for 3 requests per second.

If you're paying that much, you've been taken for a ride. That's ridiculous.


Thanks, turns out he manually set it to 4 always-on instances, so we don't know what the GAE scheduler would have done.


I don't understand the deployment section. Go has the simplest deployment I've ever had the pleasure to use. You compile the binary and copy it up. Static binaries make it so simple as to be trivial.


First, if you use C libraries, you can't just compile on OS A and copy to OS B. I'm not 100% sure, but while Go has a much better cross-compiling story than others, I believe net/http does use the OS's C library and therefore cannot be cross-compiled that easily.

Also, you have to copy the assets (templates and such), and if you're deploying a new version of a web app, you don't just overwrite the binary. My script makes sure that if something goes wrong (i.e. a new version doesn't work or has a bug), it's easy for me to revert to the previous version.

That being said, none of that implies that Go makes deployment harder (or easier, for that matter) than deploying code in other languages. It's a pain in the ass, but nothing that I couldn't solve in a day with a script written with Fabric.


> I believe net/http does use the OS's C library and therefore cannot be cross-compiled that easily

For whatever it's worth, I think cgo is only needed for native DNS resolution. If you (cross-)compile with CGO_ENABLED=0, then net/* will fall back to a pure Go DNS resolver, for which the only requirement is a valid /etc/resolv.conf on the target system.


We deploy all the libraries with the application server in a directory called lib. Just throw LD_LIBRARY_PATH=lib before the command line. That will make sure your environment is the same wherever you go. You can also compile with -static if you want to go that route. As for rollback, make your deployment script upload binaries to a directory named after the revision it was built from and then symlink it into place.


> My script makes sure that if something goes wrong (i.e. a new version doesn't work or has a bug), it's easy for me to revert to the previous version.

Isn't that the point of version control? Any DVCS makes that easy to do.


The point is that there is an existing application in production and a new application is being deployed. If the deployment fails, it simply does not enable the new application, leaving the existing one running. Using version control to roll back a running application is inadvisable (yet quite common).


What about dependencies? Client-side resources such as .js and .css files, configuration files maybe, migrating the data even? A web app is (virtually) never just its server-side code.


I wrote gobundle[1] as one solution to this.

[1] https://github.com/alecthomas/gobundle


That's very interesting! Do you just append the bundles to the executable and then have it read itself? I didn't look at the code, admittedly.


It generates Go code which is compiled into the package.
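
Conceptually it's something like this (a hypothetical sketch of the approach, not gobundle's actual output):

    // Code generated by a bundling tool (sketch). DO NOT EDIT.
    package assets

    // files maps asset paths to contents baked in at compile time.
    var files = map[string][]byte{
        "templates/index.html": []byte("<html>...</html>"),
        "static/style.css":     []byte("body { margin: 0 }"),
    }

    // Get returns the bundled asset, or nil if the path wasn't bundled.
    func Get(path string) []byte {
        return files[path]
    }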


The original article didn't discuss deploying assets, only the app/code. I suppose that might have been implicit, but I typically handle them separately.


The links for securecookie and mux are down (redirecting to https://www.gorillatoolkit.org/, which doesn't respond).





