Will the iPhone 8 include a dedicated neural network engine?

As demand for artificial intelligence (AI) increases, chip makers are striving to build more powerful and more efficient processors. The goal is to meet the requirements of neural networks with better and cheaper solutions, while staying flexible enough to handle evolving algorithms. At Hot Chips 2017, many new deep learning and AI technologies were unveiled, showing the different approaches of leading tech firms as well as budding startups. For a good summary of the event, focused on chips for AI data centers, check out the EETimes survey of Hot Chips.

The future of AI is mobile
Some of the cool stuff revealed at Hot Chips included a deeper look at Google's second-generation Tensor Processing Unit (TPU v2), which was built to perform neural network training efficiently. Microsoft also exhibited Project Brainwave, a "soft" FPGA-based deep neural network (DNN) processing unit. These solutions are intriguing for heavy-duty data centers that perform cloud-based AI, but...

While data centers are important, the most interesting use cases for AI are mobile. Since not all mobile scenarios can rely on the cloud, some of the AI processing needs to happen on-device or at the edge. Much like virtual reality, which has its own set of mobility challenges, AI must be untethered to unleash its full potential. In the rest of this column we'll consider a few of the use cases that make it clear that the future of AI is mobile.
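To make the cloud-versus-device tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. Every latency figure in it (network round trip, inference times, frame rate) is an assumption chosen purely for illustration, not a measurement of any real device, network, or accelerator:

```python
# A minimal latency-budget sketch of why on-device AI matters.
# All numbers below are illustrative assumptions, not measurements.

FRAME_BUDGET_MS = 1000 / 30  # a 30 fps camera pipeline leaves ~33 ms per frame

def cloud_latency_ms(uplink_ms=40.0, cloud_inference_ms=5.0, downlink_ms=40.0):
    """Round-trip cost of offloading one frame to a cloud accelerator (assumed figures)."""
    return uplink_ms + cloud_inference_ms + downlink_ms

def on_device_latency_ms(local_inference_ms=20.0):
    """Cost of running a smaller local model on a mobile accelerator (assumed figure)."""
    return local_inference_ms

for label, latency in (("cloud", cloud_latency_ms()),
                       ("on-device", on_device_latency_ms())):
    verdict = "fits" if latency <= FRAME_BUDGET_MS else "misses"
    print(f"{label:>9}: {latency:5.1f} ms -> {verdict} the {FRAME_BUDGET_MS:.1f} ms frame budget")
```

Under these assumed numbers, the network round trip alone blows the real-time frame budget even when the cloud inference itself is fast, which is why latency-sensitive workloads such as camera and AR pipelines push inference on-device.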

