Making Kynisys: how we're building the future of AI
I joined Imagination Technologies as the executive entrepreneur in residence in June 2019. The company is world-renowned for its GPU IP and, more recently, its neural network accelerators and ray tracing, and I knew I could develop a new idea that built on that IP.
This is the sort of technology that requires half a lifetime of specialist knowledge to understand deeply. Shortly after joining, I felt as though I had landed on a more advanced alien planet. Everyone seemed smarter than me.
After the initial learning shock, I started to see that edge AI, running inference on the device itself, is most likely the future. Its benefits are about to make the world smarter: speed, efficient data handling and storage, lower systemic costs, and no dependence on a live real-time connection.
Yep, I was sold.
Although edge technology is going to change the world, it is not the most straightforward to deploy. Senior developers and product specialists described their current workflow and pain points to me, and I left horrified. Today's workflows and solutions are far too complicated, perhaps even unknowable. It became clear to me that serious deployment on the edge currently requires an AI engineering team as large and expert as Imagination Technologies' own.