I deleted Siri off my phone a couple of days ago (had it since launch). I used it a few times with fairly poor results and ended up finding Google a whole lot more useful. Maybe I was using it wrong or had unrealistic expectations.
So is "the API web" that Robert is referring to the new, more practical take on "the semantic web"? The hazy parts of the semantic web get replaced by programmers writing apps to piece together information from APIs, of course.
This was my first thought when I realized Siri is like an agent in semweb-speak.
Does anyone know of any Google acquisitions in this space? Aardvark comes to mind, but it wasn't an autonomous system... perhaps their NLP work was state of the art?
Back in October, Siri's execs admitted that they had trouble handling access to more than six APIs at a time due to "cross talk" in the NLP. So, they face what we all do with NLP when an "agent" has to handle too much.
The value here is in applying Siri to very small one-off apps to make them more intelligent.
Yes, other startups are definitely working in this space -- and it's important to note that Siri is only a licensed technology, not a proprietary one, whereas other companies have wholly owned IP. If you search for terms like AI, mobile, interactive conversation, and retail-to-mobile, you'll probably find the others working in this area.
Apple probably wants the semantic analysis technology behind Siri, which is supposedly pretty amazing. The natural language processing is probably something they're interested in too. They will probably integrate Siri's functions into the iPhone to expand it into a personal-assistant-like feature. For example, they could add a "speak" button to the search page of the iPhone and let users search through their phone (contacts, etc.) or ask the iPhone a question: "What movies are playing around me?" They need this functionality to compete with Android, especially with the new Android versions having voice search and integrated turn-by-turn navigation.

I can see Apple developing its own voice recognition technology rather than using Nuance; I think Apple can do a better job than them. And it makes sense for Apple to integrate turn-by-turn directions somehow, just to compete with Android. They could allow Google to provide turn-by-turn on the iPhone, but I don't think they want iPhone users that dependent on Google. So many iPhone users already use Google for mail, calendar, contacts, RSS, etc. Apple needs to find a way to pull people away from reliance on Google and draw them closer to its own services.
I don't see anything truly groundbreaking about the app (I'm not saying it's a bad app; I have never used it). Nothing that Apple would see as worth buying, at least.
Siri does not seem to have anything that Apple could not have just reproduced. Apple could have paid a fraction of the price (assuming the $200 million this author estimates) to write the app itself, and then just banned Siri from the App Store for reproducing existing features.
The argument that Apple could have paid a fraction to write the app is fundamentally false.
If you look at all of Google's recent acquisitions, how many of them could Google have built for a fraction of what it paid?
And that's why a good programmer produces 10x more value than a normal programmer. But how do you hire good programmers? Acquisition is definitely one way to do it.
Siri seems to be using Nuance (http://www.nuance.com/) for speech recognition, so Apple doesn't seem to have bought any of the actual speech recognition technology.
I don't specifically mean speech recognition (although that could use an improvement), but some of the other features Siri would bring to that specific part of the iPhone.
It will be interesting to see whether this is about the app (i.e. implementing similar functionality somewhere in iPhone OS, or even offering it as an extra download) or about the technology (i.e. improving voice control). I bet it's rather the latter.
I don't think it is about the technology (improving voice control), because, as you can see in the video, the speech recognition is from Nuance.
My guess is that it is about the integration of all these web services and APIs, and the agent behind it. There is some research going on at Stanford about this; here is a presentation: http://logic.stanford.edu/talks/Wizard/
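To make the idea concrete, here's a toy sketch of that kind of agent: a natural-language request is matched against a registry of web-service handlers and routed to the best one. The service names, keyword-overlap scoring, and stub handlers are entirely my own illustrative assumptions, not Siri's actual architecture -- but even this toy version hints at the "cross talk" problem mentioned above, since scores start colliding as you register more services.

```python
def make_agent(services):
    """Build a toy dispatching agent.

    services: dict mapping service name -> (set of keywords, handler).
    This is a hypothetical sketch, not how Siri actually works.
    """
    def handle(query):
        words = set(query.lower().split())
        # Score each service by keyword overlap with the query. With many
        # registered services, overlapping vocabularies make these scores
        # collide -- one plausible reading of the "cross talk" problem.
        scores = {name: len(words & kw) for name, (kw, _) in services.items()}
        best = max(scores, key=scores.get)
        if scores[best] == 0:
            return "Sorry, I don't understand."
        _, handler = services[best]
        return handler(query)
    return handle

# Illustrative stub services standing in for real web APIs.
services = {
    "movies": ({"movie", "movies", "playing", "showtimes"},
               lambda q: "Looking up showtimes near you..."),
    "restaurants": ({"table", "restaurant", "dinner", "reserve"},
                    lambda q: "Searching for a reservation..."),
}

agent = make_agent(services)
print(agent("What movies are playing around me?"))
# -> Looking up showtimes near you...
```

The interesting (hard) part, of course, is everything this sketch leaves out: real NLP instead of keyword matching, and disambiguating between services whose vocabularies overlap.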
The sad part of this story is that I used both Audion and SoundJam back in the day, and Audion was a far superior media player at the time. It's a shame the Panic guys didn't return Apple's call and thus lost the opportunity for Audion to become Apple's new media player centerpiece (iTunes).
This, plus the leaked Facebook integration, makes me think Apple is going to try to branch out from its walled-garden semantic database (MobileMe, Mail.app, etc. integration) and into more open integration of popular services (Facebook, Yelp, Gmail, OpenTable, etc.).