Transformer Networks Optimized for ChatGPT Mobile
Siri and OK Google were initially a fun introduction to the promise of voice-based control, but we soon realized how carefully we must craft requests to get a useful response. The level of understanding we now see in ChatGPT would be much easier to use, but until recently that capability has been limited to text interaction with cloud-based apps. Now the compelling promise of ChatGPT and the ubiquity of cell phones are propelling a trend to make transformer networks for a mobile ChatGPT a reality, extending the power of large language models to everyone with a phone.
An obvious challenge is that the ChatGPT we know depends on trillions of parameters. Transformer networks of this size can only run in the cloud. Some suggest a hybrid model in which an app on the phone does some of the work, connecting to the cloud for heavier-duty inference. However, a casual phone-based user may not appreciate the long latencies and privacy risks inherent in a hybrid solution. A better approach would run most or all of the transformer network load directly on the phone, turning to the cloud only for occasional anonymized search requests if needed.
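Some back-of-the-envelope arithmetic shows why model size decides where a transformer can run. The sketch below is illustrative only: the parameter counts, precisions, and the assumption of roughly 8 GB of RAM on a typical phone are not from the article, just plausible figures for comparing cloud-scale and phone-scale models.

```python
# Rough memory-footprint arithmetic for transformer weights.
# Assumption (not from the article): a typical phone has ~8 GB of RAM.

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

# A trillion-parameter model at 16-bit (2-byte) precision:
cloud_scale = weight_memory_gb(1e12, 2)    # 2000 GB -- cloud only
# A hypothetical 3-billion-parameter model quantized to 4 bits:
phone_scale = weight_memory_gb(3e9, 0.5)   # 1.5 GB -- fits in phone RAM

print(f"1T params @ fp16:  {cloud_scale:.0f} GB")
print(f"3B params @ 4-bit: {phone_scale:.1f} GB")
```

Even before accounting for activations and the KV cache, the weights alone rule out trillion-parameter models on a handset, which is why on-device approaches lean on much smaller, aggressively quantized networks.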
To read the full article, click here