Starbucks Does Not Use Two-Phase Commit (eaipatterns.com)
84 points by jteo on July 28, 2010 | hide | past | favorite | 17 comments



This is an exceptionally well-written and accessible essay (I wish I could write like this!). There's another great piece dealing with the ultimate inability to scale atomic distributed transactions across rows, Pat Helland's "Life Beyond Distributed Transactions":

http://www-db.cs.wisc.edu/cidr/cidr2007/papers/cidr07p15.pdf (warning: PDF)

Amongst the points discussed, it talks about how real-life transactions, from getting a coffee at Starbucks to buying a home, aren't atomic but are instead based on "workflows" with promises and budgeting for worst cases.
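A rough sketch of that idea (step names are made up for illustration): instead of one atomic transaction, each completed step records a compensating action, and if a later step fails you run the compensations in reverse order — the barista discards the drink and refunds you rather than holding a lock on the register.

```ruby
# Sketch of a compensation-based workflow (no two-phase commit):
# each completed step records an "undo" action; if a later step
# fails, the recorded compensations run in reverse order.
def run_workflow(steps)
  done = []
  steps.each do |step|
    step[:action].call
    done << step
  end
  :committed
rescue StandardError
  done.reverse_each { |step| step[:compensate].call }
  :compensated
end

log = []
order = [
  { action: -> { log << "take payment" }, compensate: -> { log << "refund" } },
  { action: -> { log << "make drink" },   compensate: -> { log << "discard drink" } },
  { action: -> { raise "customer left" }, compensate: -> {} }
]
result = run_workflow(order)
# result == :compensated; the drink is thrown out and the payment refunded
```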


Wow—that's a great way to explain those concepts to the layman.


I don't know about the layman, but it's definitely a good way to explain them to programmers who haven't been exposed to them.


Yeah, you might be right... I probably need to talk to more laymen. :-)

Maybe I should have said "managers with a decent technical background" instead.


Great article. I think this article was part of 'Best of Software Writing I' by Joel Spolsky.

http://www.joelonsoftware.com/articles/BestSoftwareWriting.h...


And here are the links to essays in Best Software Writing I: http://brevity.org/misc/bestswi.html


One problem Starbucks around here seem to have is that they will inevitably make cups of coffee for mystery customers who never pick them up. I've seen as many as 3 cups of very precisely crafted coffees on the counter with nobody else in the store.


I've had to abandon coffee at the Starbucks on market street in San Francisco several times. I'd go in with 12 minutes until my bus came, and they weren't able to make a coffee that fast.


Perhaps your Starbucks barista is pulling a Jerry Glanville, the NFL coach who would leave game tickets at will-call for Elvis.


Does anyone know if there is a library for this sort of messaging in web applications? I'm working on my own for a web application I'm writing, but it would be better if one already exists. (If not, I've been planning on open-sourcing mine when it's usable...)

Incidentally, I explained this exact concept to a few friends who I talk to about my project, except I used Pat & Oscar's! You order and they give you a number, and you go sit down. Then when your order is ready, they come to you by finding the table with your number on it. A benefit to this analogy is that you can have multiple outstanding "orders", simply by having multiple numbers on your table.
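The multiple-numbers-per-table idea is basically a correlation ID. A minimal sketch (class and method names are my own, not from any particular library): each order gets a ticket number, and replies can come back in any order because they're matched to the waiting handler by that number.

```ruby
# Sketch of correlation IDs: each request gets a ticket number,
# and replies arriving in any order are matched back by that number.
class Table
  def initialize
    @next_id = 0
    @pending = {} # ticket number => callback waiting for that order
  end

  # Place an order; the returned id is the number card on the table.
  def place_order(&on_ready)
    id = (@next_id += 1)
    @pending[id] = on_ready
    id
  end

  # The runner finds the matching number and hands over the food.
  def deliver(id, food)
    @pending.delete(id).call(food)
  end
end

served = []
table = Table.new
pizza_id  = table.place_order { |f| served << "pizza order got: #{f}" }
sticks_id = table.place_order { |f| served << "sticks order got: #{f}" }
table.deliver(sticks_id, "breadsticks") # replies can arrive out of order
table.deliver(pizza_id, "pizza")
# served == ["sticks order got: breadsticks", "pizza order got: pizza"]
```

Multiple outstanding "orders" are just multiple live entries in the pending table.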

I've been meaning to write a post about how I plan on using this, I just haven't felt motivated. Maybe I will now!


Mongrel2 is still in early development, but may be relevant to you. There's a chat demo in the source repository.

http://mongrel2.org/home


Sure, that looks interesting. I'm aiming for more of a layer built specifically for asynchronous, job-based requests though. In that chat example, it's handling the message routing itself, for example, and it doesn't appear to be using a separate connection for responses. What I'm looking for is a layer that handles these things for you, and you just deal with processing the message.


I posted what I'm basically doing here: http://jonathan.com/aspects-messaging-model


Have you looked into http://www.tornadoweb.org/?


Yes! It's the first thing I looked at when I started on my project. But honestly, I just can't work in Python. I'm not going to bash it, but we just don't jibe. I found EventMachine for Ruby and have a nice pipeline set up, from nginx (server) down to Cramp (framework).

Still, it's not really what I was asking for. I'm looking more for a job messaging architecture, where the client makes a request and the server asynchronously works on it, and sends it down whenever it's finished. Ideally you have one connection that stays constantly open (or you re-open it whenever it closes) that receives the results, and you use the other connection (since the bare minimum for Ajax connections is 2) to make your requests. That requires cooperation on both sides of the gap.


This is generically how long-polling usually works: you have one fast channel for sending messages, then a persistent HTTP connection (either long-polling or HTTP streaming) to send back responses/results.

However it doesn't seem like you would need that type of architecture unless you need to push data from the server to the client (like email updates or instant messages). There's no real harm in having the first connection open while the server is processing the request.
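In outline, the client side of that pattern looks something like this (hypothetical names; a real version would use an actual HTTP client and a blocking long-poll instead of the in-memory queues used here):

```ruby
# Sketch of two-channel job messaging: one channel fires requests,
# a second loop receives whatever results the server has finished,
# dispatching each by its job id.
require "securerandom"

def submit_job(outbox, payload)
  id = SecureRandom.hex(8)
  outbox << { id: id, payload: payload } # fast "send" channel
  id
end

def poll_once(results, handlers)
  # In a real app this would be a blocking HTTP long-poll;
  # here we just drain whatever the fake server has ready.
  until results.empty?
    msg = results.shift
    handlers[msg[:id]]&.call(msg[:result])
  end
end

outbox = []
handlers = {}
id = submit_job(outbox, "resize image")
got = nil
handlers[id] = ->(r) { got = r }

# Pretend the server worked through the queue asynchronously:
results = outbox.map { |m| { id: m[:id], result: "done: #{m[:payload]}" } }
poll_once(results, handlers)
# got == "done: resize image"
```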


Well, yeah. That's precisely what I'm doing. :)



