I took a look. Your approach is to detect requests from spiders and respond with plain HTML content rather than the content wrapped in JavaScript, etc., that a normal user would get. You address the obvious question, "But isn't that cloaking?" by saying no, the content itself is the same, so nobody should object. Fair enough, I happen to agree with you, but our opinion is irrelevant; what matters is whether Google considers this practice legitimate. Can you (or any HNer) tell me definitively whether they do or not? And has their policy changed recently with the introduction of this new crawlability spec?
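For concreteness, the detection approach described above usually boils down to matching the request's User-Agent header against known crawler signatures. Here's a minimal sketch of that idea in Python (hypothetical, not NOLOH's actual implementation; the spider list and function names are my own assumptions):

```python
# Hypothetical sketch of spider detection: check the User-Agent header
# against a list of known crawler signatures and branch on the result.
KNOWN_SPIDERS = ("googlebot", "bingbot", "slurp", "duckduckbot", "baiduspider")

def is_spider(user_agent: str) -> bool:
    """Return True if the User-Agent string looks like a known crawler."""
    ua = user_agent.lower()
    return any(sig in ua for sig in KNOWN_SPIDERS)

def render(user_agent: str) -> str:
    # Same content either way -- only the delivery differs, which is
    # the basis of the "this isn't cloaking" argument.
    if is_spider(user_agent):
        return "<html><body><p>Plain HTML content</p></body></html>"
    return "<html><body><script>renderContent()</script></body></html>"

print(is_spider("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # True
print(is_spider("Mozilla/5.0 (Windows NT 10.0) Firefox/90.0"))  # False
```

Note that User-Agent strings are trivially spoofable, so whether Google blesses this depends on the served content being identical, not on the detection mechanism itself.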
Disclaimer: I'm a co-founder of NOLOH.