droopyEyelids was correct in saying that I was talking about software innovation from a CS standpoint, so I didn't respond to the first reply to my comment. After reading your comment, I realized I should make it clear that I'm using David A. Wheeler's definition of software innovation [1]. It was linked on HN a few years ago [2]; it has an excellent list of important innovations, compiled to show that the most important computer innovations weren't patented.
Wheeler does include the Internet (internetworking using datagrams, leading up to the Internet's TCP/IP) as an important innovation.
On the page I linked to [1], Section 6 is called "What is not an important software innovation?", which inspired my post in part. I'll quote a portion of it below:
" As I noted earlier, many important events in computing aren’t software innovations, such as the announcements of new hardware platforms. Indeed, sometimes the importance isn’t in the technology at all; when IBM announced their first IBM PC, neither the hardware nor software was innovative - the announcement was important primarily because IBM’s imprimateur made many people feel confident that it was “safe” to buy a personal computer.
"An obvious example is that smartphones are not a software innovation. In the mid-2000s, smartphones rapidly became more common. By "smartphone" I mean a phone that can stay connected to the Internet, access the internet with a web browser capable of running programs (e.g., in Javascript), and install local applications. There's no doubt that widespread smartphone availability has had a profound impact on society. But while smartphones have had an important social impact, smartphones do not represent any siginificant software innovation. Smartphones typically run operating systems and middleware that are merely minor variants of software that was already running on other systems, and their software is developed in traditional ways.
"Note that there are few software innovation identified in recent times. I believe that part of the reason is that over the last number of years some key software markets have been controlled by monopolies. Monopolies typically inhibit innovation; a monopoly has a strong financial incentive to keep things more or less the way they are. Also, it’s difficult to identify the “most important” innovations within the last few years. Usually what is most important is not clear until years after its development. Software technology, like many other areas, is subject to fads. Most “exciting new technologies” are simply fashions that will turn out to be impractical (or only useful in a narrow niche), or are simply rehashes of old ideas with new names."
I wonder whether, even with this incredibly strict definition of "Software Innovation", anything on a smartphone might still rise to the level of innovation?
I'm thinking of the geolocation applications - things that just never existed before, ever, like Runkeeper. Or the body-telemetry apps, like FitBit. I'm still trying to work out whether it's possible to categorize Dark Sky as innovation. Or the always-available voice recognition of Siri.
None of it rises to the level of "Software innovation from a CS standpoint" if we look carefully? Honestly asking now.
By LowKarmaAccount/David Wheeler's definition, no. If you browse the list in Wheeler's article, the only things that jump out at the "application feature" level are word processing and the spreadsheet. Clearly, Runkeeper, Dark Sky, and the like are not going to make that list. It really is focused on the "pure CS/EE" side of things, and since smartphone app development is mostly business as usual with a few different quirks, none of it is likely to qualify.
I'm thinking that something like "robust speech recognition and parsing" might fit on that list (though it's tough to determine when such a system would be considered mature enough, and simultaneously differentiated from full strong AI). But that's not a smartphone innovation; it just happens to be a good use case.
It really comes down to the definition/interpretation of "innovation," and I think many people (myself included) would feel slighted when certain things are excluded, but I can see how "taking things that already exist and putting them together in new ways or usefully extrapolating on them" (which is what any "innovative" app has done) doesn't really fit. As he explained about the smartphone, something can be world-changing in a very real way without necessarily being innovative.
To respond to the quote about "few software innovations identified in recent times": there's been plenty of innovation in the last decade or so. It might not be quite as fundamental as the innovations of the 20th century - perhaps because all those fundamental things needed to be invented but now have been - but it's there. The examples below are not smartphone innovations as such, because I agree that most of what makes smartphones interesting lies outside computer science, but your posts and the page you linked to are more broadly critical of progress in computer science, and most of these developments do show up in or around smartphones. Anyway:
SMP - parallelism not among separate, isolated computers whose fastest connection is probably an Ethernet port, but among multiple cores on the same die, accessing the same RAM. Of course someone had an SMP system decades ago - it's not that complicated an idea - but only recently has it become ubiquitous and critical to taking continued advantage of Moore's Law. Although it's fundamentally a hardware innovation, the ways we write multithreaded programs have evolved to take advantage of it: it's only recently that the sort of software used on millions of systems has had to focus on fine-grained locking and lock-free algorithms, rather than slapping big locks on everything with the only downside being latency. And more unorthodox approaches are gaining traction: CSP, though invented 35 years ago, is being rediscovered with Go; various languages have experimented with software transactional memory (e.g., PyPy, Clojure); and Intel's new processors will bring hardware transactional memory mainstream, which might finally realize the dream of fast transactional memory.
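As a rough sketch of that CSP style (my own illustration, not from the article; the counter/inc/get names are made up): in Go, shared state can be confined to a single goroutine and reached only through channels, instead of being guarded by locks.

    package main

    import "fmt"

    // counter owns its state; other goroutines communicate with it
    // over channels instead of sharing memory under a lock (CSP style).
    func counter(inc <-chan int, get chan<- int) {
        total := 0
        for {
            select {
            case n := <-inc:
                total += n
            case get <- total:
            }
        }
    }

    func main() {
        inc := make(chan int)
        get := make(chan int)
        go counter(inc, get)

        for i := 0; i < 5; i++ {
            inc <- 1
        }
        fmt.Println(<-get) // prints 5
    }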
GPUs - massive parallelism of relatively small, minimally branching algorithms, again on a single chip found in millions of devices; again, a hardware innovation that requires new ways to write software. Yes, I know how old the transputer is, but now it's mainstream.
Virtual machines - a new consummation of the idea of time sharing, innovative in practice if not in theory. My personal opinion is that they're a massive hack: a poor man's multi-user system that accomplishes nothing a traditional kernel couldn't have, given the right operating system design, with all the kludginess you'd expect from taking kernels designed to run directly on hardware and hacking them into running side by side. By the time disk space and RAM became cheap enough that it was obvious each user of a server should have their own isolated copy of all software - free to install and maintain whatever versions of whatever packages they need - Unix had developed so far around the idea of a central administrator that the new paradigm had to evolve rather than be designed. But who cares? Worse is better, the heritage of Unix and perhaps its conqueror. However it came about, ordinary users of multi-user systems now have more power on average than ever before. Consider the difference between a PHP script on shared hosting and a modern webapp stack. And maybe a new design will come along to replace it all, one of these days.
Closely related, cloud computing - I suppose driven by the decreasing price of hardware. The idea of computing as a commodity is hardly new, but in the last few years it has become a reality: virtual servers can now be spun up and down in minutes, as part of massive server farms provided as a service to a huge number of users, for low cost. This is fundamentally changing the way software is designed: scalability is easier than ever, but it has become more and more useful to write distributed systems that can tolerate internal failure.
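To make "tolerate internal failure" a bit more concrete, here is a minimal Go sketch (my example; the fetchAny helper and replica URLs are hypothetical) of the try-the-next-replica pattern that cloud-era services lean on:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // fetchAny tries each replica in turn and returns the first success.
    // The replica URLs used here are hypothetical placeholders.
    func fetchAny(urls []string) (*http.Response, error) {
        client := &http.Client{Timeout: 2 * time.Second}
        var lastErr error
        for _, u := range urls {
            resp, err := client.Get(u)
            if err == nil && resp.StatusCode == http.StatusOK {
                return resp, nil
            }
            if err == nil {
                resp.Body.Close()
                err = fmt.Errorf("replica %s returned %d", u, resp.StatusCode)
            }
            lastErr = err // remember the failure and fall through to the next replica
        }
        return nil, lastErr
    }

    func main() {
        resp, err := fetchAny([]string{
            "http://replica-1.example/status",
            "http://replica-2.example/status",
        })
        if err != nil {
            fmt.Println("all replicas failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("got response:", resp.Status)
    }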
HTML5. You can enter a URL and instantly and safely run fast code. Yes, it's just another VM; yes, Java had this a long time ago. But we avoided some of Java's mistakes and CPUs are faster, so who knows, write-once-run-anywhere might become a reality this time.
Sandboxing. We might still be stuck with C and unsafe code, but sandboxing is starting to make software a lot harder to exploit anyway. Software security in general is receiving a lot of attention these days.
Functional programming and other languages with strong static type systems have had a resurgence lately. Going back a bit farther, you could say the same about dynamically typed languages such as Python, Ruby, and JavaScript. So many different languages have taken so many different paths that it's hard to identify a single clear direction programming languages have gone in, but there are some commonalities, and they add up to a significant amount of incremental innovation. There is a lot more to say about this, but I'm getting tired and it would take a lot to do it justice.
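One small illustration of that convergence (my own sketch, assuming Go 1.18+ generics; the Map helper is made up): even a deliberately conservative mainstream language can now express the classic functional map with static types.

    package main

    import (
        "fmt"
        "strconv"
    )

    // Map applies f to every element of xs and returns a new slice.
    // Generic and statically typed - a style borrowed from functional languages.
    func Map[T, U any](xs []T, f func(T) U) []U {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    func main() {
        nums := []int{1, 2, 3}
        strs := Map(nums, strconv.Itoa)
        fmt.Println(strs) // [1 2 3]
    }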
Ways to write software: test-driven development, agile, etc.
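For what it's worth, here's a tiny sketch of test-driven development using Go's standard testing package (the Slugify function and file names are hypothetical): the test is written first and drives the implementation.

    // slug.go
    package slug

    import "strings"

    // Slugify lowercases a title and replaces spaces with hyphens.
    func Slugify(title string) string {
        return strings.ReplaceAll(strings.ToLower(title), " ", "-")
    }

    // slug_test.go
    package slug

    import "testing"

    func TestSlugify(t *testing.T) {
        cases := map[string]string{
            "Hello World":       "hello-world",
            "Already-Slugified": "already-slugified",
        }
        for in, want := range cases {
            if got := Slugify(in); got != want {
                t.Errorf("Slugify(%q) = %q, want %q", in, got, want)
            }
        }
    }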
[1]: http://www.dwheeler.com/innovation/innovation.html
[2]: https://news.ycombinator.com/item?id=813110