Amazon has introduced the AZ1 Neural Edge machine learning processor that is baked into its latest products. The AZ1 is a custom processor built in collaboration with MediaTek, and Amazon is positioning it as a key differentiator in making Alexa faster.
Amazon says the AZ1 is an “all-new silicon module that’s purpose-built for accelerating machine learning applications.” The AZ1 runs an all-neural speech recognition model that processes requests faster, and Amazon says it was able to shave hundreds of milliseconds off each response.
Amazon’s preexisting products without the AZ1 send audio to the cloud for processing and wait for the response to come back. By comparison, the new devices with the AZ1 built in can process audio on-device, cutting down the time it takes to give users a response. Regardless of how a device processes your voice, the way you view your voice history and utterances in the Alexa app isn’t changing.
Amazon says these latency benefits will be available first for English in the US, but it plans to expand to more languages “in time.”
This partnership between Amazon and MediaTek is reminiscent of Microsoft’s partnership with Qualcomm on the SQ1 processor made for the Surface Pro X, at least in terms of big companies co-engineering silicon with chipmakers. In practice, the AZ1 seems more similar to Google’s Neural Core that’s implemented in the Pixel 4. In addition to helping to improve photography, that dedicated chip enables the device to understand spoken English, to the point where it can transcribe recordings in real time without being connected to the internet.
The AZ1 is built into the latest-generation Echo products, including the new Echo and Echo Show 10.