Funny, because nearly all of your designs/apps use the same button/gradient/image/border/Gotham/arrow themes for nearly every single UI element. I'm not saying that's such a bad thing, but it's certainly not pushing any boundaries.
Honestly, I find their new "Zeta" website to be a tremendous improvement that was probably researched heavily by a design team. Basically you took that design, added a couple gradients and pretty icons, and called it "better".
It looks pretty, but did you ever think that maybe that's not /really/ what matters to them?
"All that said, your design is more visually appealing. I just wish it had accompanied a post called "My Interesting Alternate Take On The Zappos Homepage" and not "You're Killing Me, Zappos"."
" they (developers) tried a glossy gradient on a button, and they noticed the traffic went down that day, or conversions sank. so they assume it happens on a grand scale."
This is really not how A/B testing works. There is this small branch of mathematics called statistics that helps us avoid exactly those "one-off" errors, and it's central to A/B testing.
Now A/B testing can (and should) be subject to scrutiny. But know whereof ye speak.
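For what it's worth, here's a minimal sketch of what that "small branch of mathematics" buys you: a two-proportion z-test on a single day's conversion counts. The numbers, variable names, and the scenario are entirely made up for illustration.

```python
# Hypothetical one-day A/B comparison: control (flat button) vs. variant
# (glossy gradient button). All figures are invented for illustration.
from math import sqrt
from scipy.stats import norm

control_visitors, control_conversions = 10_000, 520
variant_visitors, variant_conversions = 10_000, 498

p1 = control_conversions / control_visitors
p2 = variant_conversions / variant_visitors
# Pooled conversion rate under the null hypothesis that both rates are equal.
p_pool = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)

# Standard error of the difference in proportions under the null.
se = sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / variant_visitors))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))  # two-sided test

print(f"z = {z:.2f}, p = {p_value:.3f}")
# With these made-up numbers, p is around 0.5: the dip is well within
# day-to-day noise, so you couldn't conclude the gradient "sank conversions".
```

That's the whole point of running a proper test instead of eyeballing one day's traffic: the statistics tell you whether an observed drop is signal or noise.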
EDIT: The parent entry was deleted so this may not make sense any more.
Any competent front-end designer could easily figure out how to code this.