Saying SmartOS gives you persistent storage is like saying Docker gives you persistent storage because you can use
docker run .... -v /storage:/storage myapp
That's not what people are talking about. If you run a container on SmartOS and the physical machine it's running on blows up, what happens to your persistent storage?
Technically it still persists, just in the form of Hawking radiation instead of bits and bytes. Some people think this is what makes Amazon Glacier so cheap.
This statement is factually incorrect. C is the most common higher-level language across all platforms, especially when you include embedded devices. (I'm not counting assembly as a higher-level language.)
That approach is what got us XHTML 2.0 and a "lost decade" for web technology. The things Google adopts (VP8, Dart, SPDY) are things it releases and publishes as open standards, and they are therefore significantly more standard and open than e.g. JavaScript, or even the <img> tag, were when first introduced.
>The idea that nobody can do anything cool on the web unless all browsers support it seems like a great way to encourage stagnation.
But that's the entire point of open web standards. If you don't like using a runtime that is the lowest common denominator across all platforms then why are you using the web in the first place?
I really don't understand people who claim to support the web and web standards but then moan about vendor X or Y not implementing this or that. That's the single biggest defining feature of open web standards; things don't happen unless everybody agrees. If you don't like it that individual vendors have veto power over things then you don't like open web standards. If you don't like technology that moves slowly and by consensus then you don't like open web standards. These are the costs of creating a platform that is defined by open standards.
Arguing definitions is a waste of time. How about this: I like published formats that become standards as and when they gain multiple implementations. If you try to standardize first and then implement, you get CSS2 (or, my first example, XHTML2). The web features we use are there because one vendor or another implemented them, experimentally (again I refer you to JavaScript, or the <img> tag), and they became standards some time after that. For a non-web example, consider Python: at first, the implementation was the spec; as it matured and things like Jython and PyPy became important, the spec took on more of an independent existence.
This is the model that works, and Google is trying to continue it. Best of luck to them.
> How about this: I like published formats that become standards as and when they gain multiple implementations.
That is how the standards process generally works these days.
> The web features we use are there because one vendor or another implemented them, experimentally (again I refer you to JavaScript, or the <img> tag), and they became standards some time after that.
That was a long time ago, when there were few browsers and the Web was much smaller. Nowadays, whenever a browser ships anything, content immediately starts relying on it, and it becomes frozen for all time. None of your other examples have billions of pieces of content; the probability that some content starts relying on the random corner cases of whatever you ship is almost certain. That is one of the most important reasons the standards process exists: to allow multiple vendors a seat at the table in order to create something that makes sense, as opposed to sitting down, writing a pile of code, having content depend on the random bugs and corner cases in your implementation, and forcing all other vendors to reverse engineer your code. (Being open source does not make reverse engineering free, and doesn't even make it that much easier: the HTML5 parsing algorithm was reverse engineered from IE6 without looking at the source code.)
> This is the model that works, and Google is trying to continue it. Best of luck to them.
It's also what got us quirks mode, the content sniffing algorithm, the gratuitous complexity of HTML5 parsing, marquee, blink, and the incredibly underdocumented and hideously complex tables specification. I could go on.
You're portraying CSS2 as a failure, but CSS2 is actually a great example of something that is implementable by virtue of being standardized. CSS2 only looks bad because you can go to the standard and look at the complexity, but automatic table layout (what we had before) is much worse, being defined by a pile of C++ code that few people in the world know, with corner cases and weirdnesses a mile long. To this day, table layout is essentially implemented by reverse engineering Gecko. As someone who has implemented both features, I much prefer the former.
There's a significant difference between implementing something in your browser, publishing the source, putting out demos, etc. and making it a critical part of a primary business web application to the extent that you explicitly wall off browsers that don't support it.
You want to add new functionality to the web? Great, implement it, make some demos, show us why it's awesome and something we should all implement too. Advocate and demonstrate all you want. But don't make your applications break for anyone that doesn't support your new fanciness.
Lack of competition for Internet Explorer got us the lost decade – Microsoft was perfectly happy with the web not being competitive with unportable desktop apps.
Was that the "lost decade" when we got LinkedIn, Skype, Second Life, MySpace, Flickr, Facebook, Gmail, Google Maps, YouTube, Pandora, Twitter and many of the other things on which we now depend?
Read the comment I was replying to and note that it says “web technology” rather than “web sites”. While people built some great things, they did so under significant limitations and had to use features that were not standardized. XMLHttpRequest is a great example – it was used by most of the sites you mentioned but wasn't even submitted for standardization until a full seven years after it first shipped.
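For anyone who missed that era: the whole "AJAX" pattern those sites relied on was just this vendor-specific callback API, shipped years before any spec existed. A minimal sketch of the pattern — the stub class and the URL here are made up so the snippet is self-contained outside a browser, where `XMLHttpRequest` would be a global:

```typescript
// Stand-in for the browser's XMLHttpRequest global (not the real implementation):
// it completes synchronously with a canned response so the snippet runs anywhere.
class XMLHttpRequestStub {
  readyState = 0;
  status = 0;
  responseText = "";
  onreadystatechange: (() => void) | null = null;
  open(_method: string, _url: string) {
    this.readyState = 1; // OPENED
  }
  send() {
    // Simulate a completed, successful response.
    this.readyState = 4; // DONE
    this.status = 200;
    this.responseText = '{"ok":true}';
    if (this.onreadystatechange) this.onreadystatechange();
  }
}

// The classic circa-2004 usage pattern (hypothetical endpoint):
const xhr = new XMLHttpRequestStub();
let received = "";
xhr.open("GET", "/inbox/unread");
xhr.onreadystatechange = () => {
  if (xhr.readyState === 4 && xhr.status === 200) {
    received = xhr.responseText;
  }
};
xhr.send();
console.log(received); // the stub completes synchronously, so this logs the response
```

The point being: every browser had to copy this object, quirks and all, from whatever shipped first — standardization came long after the content depended on it.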
See comment from jordanlev above: "Also hints at the idea that the stability of IE6 for a while actually created a good environment for innovation in the web app space to take place"
2. Note that immediately before your quote, he made the same point I made: “Microsoft intentionally let IE6 development come to a halt because it was no longer strategically beneficial to them”.
3. In addition to reversing your earlier position on the first point, you never stated anything like his second point – just a tangent from the topic in question. You could have fleshed it out into something similar but never did.
4. There's potentially an interesting discussion about the benefits of API stability, but that's not conclusively proven – there are many confounds – and there's a separate question of actually specifying behaviours and fixing bugs in the various in-the-wild versions. As anyone who was working on the web in that era remembers, even IE6 wasn't reliably a single target, since key features depended on the combination of Windows patches installed on the client. There would have been zero downside had Microsoft more aggressively promoted updates so IE would consistently support HTTP compression, SSL, caching, etc. rather than marking them as minor updates.
If you donate 100 euros, do you get anything, or is it just supporting the project? I couldn't quite figure out whether donating gets you something like an early-revision device.
If you donate at least 100 euros, you can be sure a device will be produced for you (they're built to order, not for stock). The amount you pay now also counts toward the final price: donate 100 EUR today and you'll pay 100 EUR less when you order the finished device.