Hacker News

Yeah, I was hoping for the same.

Specifically, I came in curious about how it knows to "start paying attention" at the wake word "Alexa." I assume that part must be integral to the local system, although I was under the impression that most of these AI systems did a lot of their speech processing back home rather than locally.

Otherwise Amazon would A) be processing a lot of irrelevant conversation and B) be eavesdropping.

So my guess was that the Alexa aspect must be local.

Very neat, even if the article is a bit weak. I put myself in the place of saying, "Alexa, call me an Uber please, I'd like to go to the movies," which is pretty neat. Especially since you could work through: "What's playing nearby, Alexa?" ... "Okay, in 25 minutes call me an Uber for the Avengers, Alexa."

I assume you have to say "Alexa" every time? Or maybe once you're conversing it will keep talking with you? Can you say "Alexa" at the end? Does it keep a rolling buffer of the last five seconds or so of audio at any given point for parsing questions?
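That last question is easy to sketch: a fixed-size ring buffer that always holds roughly the last few seconds of audio, with old frames falling off as new ones arrive. This is a minimal illustration, not Amazon's actual implementation; the sample rate and frame size are assumptions.

```python
from collections import deque

SAMPLE_RATE = 16000          # assumed: 16 kHz mono audio
FRAME_SIZE = 160             # assumed: 10 ms frames
BUFFER_SECONDS = 5
MAX_FRAMES = BUFFER_SECONDS * SAMPLE_RATE // FRAME_SIZE  # 500 frames

# Ring buffer holding roughly the last 5 seconds of audio frames.
# deque with maxlen discards the oldest frame automatically on append.
recent_audio = deque(maxlen=MAX_FRAMES)

def on_frame(frame):
    """Append each incoming 10 ms frame; older frames age out by themselves."""
    recent_audio.append(frame)
```

With a buffer like this, the device can re-parse the trailing audio after the wake word fires, even if "Alexa" was said at the end of the request.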

Lots of interesting challenges.




All wake word spotting (whether for "Alexa," "Amazon," or "Echo") takes place on the device, specifically so that unintended audio is never streamed to the cloud (for privacy, and for the separate benefit of reduced cloud processing costs).


Excerpts from Wikipedia [1]:

"In the default mode the device continuously listens to all speech, monitoring for the wake word to be spoken."

"... Echo only streams recordings from the user's home when the 'wake word' activates the device."

[1] https://en.wikipedia.org/wiki/Amazon_Echo


It's the same as "OK Google" on certain phones... a local DSP chip that only needs to process the wake word.



