Yes. The data modeling was by far the hardest part of the app. If you look in the repo you'll see my first attempt used a separate index table, which was convenient but expensive. The current data model is quite nice in that the number of users doesn't matter: we don't keep track of who subscribes to which feed. But because of that, when getting the unread story list, we must do 1+N queries (where N is the number of feeds). However, since we're using Go and the datastore, all that work can be done in parallel, so it's not too slow.
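To make that fan-out concrete, here's a minimal sketch of the idea. Feed, Story, and fetchUnread are illustrative stand-ins, not the app's actual types or datastore queries; the point is just one goroutine per feed with results gathered over a channel:

```go
package main

import "fmt"

// Feed and Story are illustrative stand-ins, not the app's real entities.
type Feed struct{ URL string }
type Story struct{ Title string }

// fetchUnread is a hypothetical helper that would run the per-feed
// datastore query; here it just returns nothing.
func fetchUnread(f Feed) []Story {
	return nil
}

// unreadStories does the "N queries" part of 1+N concurrently:
// one goroutine per feed, results collected over a buffered channel.
func unreadStories(feeds []Feed) []Story {
	results := make(chan []Story, len(feeds))
	for _, f := range feeds {
		go func(f Feed) { results <- fetchUnread(f) }(f)
	}
	var all []Story
	for range feeds {
		stories := <-results
		all = append(all, stories...)
	}
	return all
}

func main() {
	feeds := []Feed{{URL: "http://example.com/a.xml"}, {URL: "http://example.com/b.xml"}}
	fmt.Println(len(unreadStories(feeds)))
}
```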
> However, since we're using Go and the datastore, all that work can be done in parallel, so it's not too slow.
I thought we couldn't run goroutines in parallel yet. Per:
'The Go runtime environment for App Engine provides full support for goroutines, but not for parallel execution: goroutines are scheduled onto a single operating system thread. This single-thread restriction may be lifted in future versions. Multiple requests may be handled concurrently by a given instance.' [1]
That description is misleading. All it means is that only one goroutine is executing at any given time across all requests. Up to 10 requests can be in flight on any Go instance, but most of the time the goroutines are just waiting for data to come back, so we can fire up a bunch of goroutines to do the work. It's the parallelism vs. concurrency distinction Rob Pike has spoken about before: Go on App Engine isn't parallel, but it is concurrent.
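A toy way to see that distinction, with nothing App Engine specific in it (the sleep just stands in for waiting on a datastore RPC):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	// Pin the scheduler to one OS thread, like the App Engine Go runtime
	// described above: no parallelism, but goroutines still overlap
	// while they are blocked waiting on I/O.
	runtime.GOMAXPROCS(1)

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			time.Sleep(100 * time.Millisecond) // stand-in for a datastore RPC
		}()
	}
	wg.Wait()
	// Prints roughly 100ms, not 1s: the waits overlap even on one thread.
	fmt.Println(time.Since(start))
}
```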
I'd be very interested in reading a blog post about the data modeling process, especially if you find yourself changing it further as you receive more traffic.