
> Paper weaknesses

> Evaluated using prototypes of protocols in Go

Ridiculous. This is why I left academia.




Eh. (blog author here). It was exceptionally annoying, but in the end, the paper got in. You have to develop a thick skin, but I've found that in the long run, the process works pretty well. The feedback we got from our first submission to OSDI was awesome, and helped seriously improve the quality of the research, not just the paper. (It led us, for example, to adding the formal proof and the machine-checkable proof.) The NSDI feedback was crappy. I'm disappointed, because I like NSDI a lot. I think people were just having a bad-Paxos day. The final SOSP feedback was again really good.


I'm someone who's happy you put the proofs in there.

As soon as I read "has no designated leader" I stopped reading and skimmed through to make sure safety and liveness proofs were in there somewhere before I spent any significant time reading the rest.

This feels like it's probably a nice advancement over S-Paxos and I'm looking forward to getting through it -- thanks for making it accessible.

Also, this is awesome: "Copyright is held by the Owner/Author(s)"


That last awesomeness is SIGOPS: they bought out the copyright for all papers at SOSP'13 on behalf of the authors. Three cheers. All of the SOSP papers are already available, open-access: http://sigops.org/sosp/sosp13/program.html

(For the non-academics: SIGOPS is the ACM's Special Interest Group on Operating Systems. It runs SOSP and similar events.)


Great work. Do you have a link to the machine-checked proof? In the technical report there is only a formal specification and a detailed but not machine-checkable proof. It would be really great to see systems researchers write machine-checked proofs :)


The specification can be model checked, but we don't have a machine-checkable proof. I agree that that would be very useful, and we'll think about writing one at some point in the future.


Whoops, Iulian's correct - I was being imprecise earlier. We have a formal specification in TLA+ that we put through the TLC model checker for several weeks of runtime on the biggest machine we could find, not a machine-checkable proof.

But to the original questioner, yes, we should put the TLA+ spec online instead of requiring someone to copy/paste from the tech report... please hold. :)


The TLA+ spec is now in the git repository. Thanks for the suggestion!


I think it's a fair weakness. Since Go has a mark-and-sweep GC that runs at unpredictable moments, for unpredictable durations, the performance and responsiveness of a program can be significantly affected by factors as small as a code change that happens to trigger (or avoid) a collection. That's not to say GC is inherently slower than C++-style malloc/free or reference counting, only that small code changes can lead to large, unpredictable swings in performance.


The article even mentions this explicitly:

> Getting the performance variation down was a little tricky. In several spots, we had to think carefully (and experiment with some dead-ends) about how to reduce the amount of garbage we were generating.

They do say it was worth it in the end, and that other languages would have similar drawbacks.
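The garbage-reduction idea they describe can be sketched in Go. This is not the paper's code; `encodeNaive` and `encodeReused` are hypothetical message-encoding helpers made up for illustration. The point is that reusing a buffer's capacity instead of allocating a fresh one per message drops steady-state garbage to zero, which `testing.AllocsPerRun` can measure directly:

```go
package main

import (
	"fmt"
	"testing"
)

// sink keeps results live so the compiler can't stack-allocate them away.
var sink []byte

// encodeNaive builds a length-prefixed message in a fresh buffer,
// producing one heap allocation per call -- garbage the GC must collect.
func encodeNaive(msg []byte) []byte {
	buf := make([]byte, 0, len(msg)+1)
	buf = append(buf, byte(len(msg)))
	return append(buf, msg...)
}

// scratch is reused across calls (safe only from a single goroutine).
var scratch = make([]byte, 0, 1024)

// encodeReused resets the buffer's length but keeps its capacity, so
// steady-state calls generate no garbage at all.
func encodeReused(msg []byte) []byte {
	scratch = scratch[:0]
	scratch = append(scratch, byte(len(msg)))
	scratch = append(scratch, msg...)
	return scratch
}

func main() {
	msg := []byte("paxos accept")
	naive := testing.AllocsPerRun(1000, func() { sink = encodeNaive(msg) })
	reused := testing.AllocsPerRun(1000, func() { sink = encodeReused(msg) })
	fmt.Printf("naive: %.0f allocs/op, reused: %.0f allocs/op\n", naive, reused)
}
```

For concurrent code, `sync.Pool` serves the same purpose without the single-goroutine restriction; either way, the win is fewer and shorter GC pauses, at the cost of more careful reasoning about buffer lifetimes.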



