In-memory computing

Overview

CompuRAM™ provides in-memory computing (IMC) that enables more power-efficient solutions for computing at the Edge. At present, sensor data often has to be sent from an IoT device to a server for processing, which creates a connectivity requirement and unavoidable latency. For time-critical applications this is not acceptable, so there is a drive to do more computation within the device itself, i.e., AI processing at the Edge. Power is a significant design constraint in IoT devices, so any extra AI-related computation must be done in a power-efficient way. sureCore's existing low-power memory solutions already provide a way to add the significant extra memory needed by AI applications without dramatically increasing power requirements. In-memory computing provides further power savings by reducing the need to move large amounts of data around the chip, because the initial processing of data is carried out very close to the memory array itself.

Our intimate knowledge of memory technology has enabled us to create a solution for the next technology demand: integrating arithmetic operations within the memory. It is another example of us seeing what the industry will require in the near future and developing a solution that will be ready when AI at the Edge becomes mainstream. Cutting power consumption is what we do, as we have proven with our existing technologies such as the EverOn™ and PowerMiser™ SRAM families, which enable near-threshold operation and 50% dynamic power savings respectively. All our solutions are designed to make it possible to create products for the next generations of ultra-low-power applications, products that could not exist without these power-reducing techniques.

In the same way that on-chip memory is faster and more power-efficient than transporting data back and forth to off-chip memory, integrating memory and compute capability offers even more significant power savings. sureCore's in-memory compute technology achieves this integration by embedding arithmetic capability deep within the memory array, in a way that is compatible with its existing silicon-proven, low-power memory design.
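The power-saving argument above, that computing next to the memory array avoids moving operands across the chip, can be sketched with a simple back-of-the-envelope energy model. All per-operation energy figures below are hypothetical placeholders chosen for illustration only; they are not sureCore or CompuRAM™ data.

```python
# Illustrative energy model: in-memory computing (IMC) vs. a conventional
# fetch-then-compute pipeline. All energy numbers are hypothetical
# placeholders (picojoules), not measured or vendor-supplied figures.

E_SRAM_READ = 5.0      # assumed: read one operand from on-chip SRAM
E_BUS_TRANSFER = 10.0  # assumed: move one operand across the chip to the ALU
E_MAC = 1.0            # assumed: one multiply-accumulate operation

def conventional_energy(n_macs: int) -> float:
    """Each MAC reads two operands and moves both to a distant compute unit."""
    per_mac = 2 * (E_SRAM_READ + E_BUS_TRANSFER) + E_MAC
    return n_macs * per_mac

def imc_energy(n_macs: int) -> float:
    """Arithmetic happens beside the array, so the bus transfers are avoided."""
    per_mac = 2 * E_SRAM_READ + E_MAC
    return n_macs * per_mac

if __name__ == "__main__":
    n = 1_000_000  # MACs in a small neural-network layer
    conv, imc = conventional_energy(n), imc_energy(n)
    print(f"conventional: {conv / 1e6:.1f} uJ, IMC: {imc / 1e6:.1f} uJ, "
          f"saving: {100 * (1 - imc / conv):.0f}%")
```

With these placeholder numbers the data movement dominates the arithmetic, so eliminating the bus transfers cuts total energy by roughly two-thirds; the real ratio depends entirely on the process node and chip floorplan.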
Applications
- AI processing at the Edge
- IoT applications
- Ultra-low power applications
Technical Specifications
Foundry: TSMC
Process node: 22nm
Status: Pre-silicon
Related IPs
- Low power 32-bit processor with lightweight computing power
- AIoT processor with vector computing engine
- Fractional-N PLL for Performance Computing in GlobalFoundries 22FDX
- Fractional-N PLL for Performance Computing in GlobalFoundries 12LPP/14LPP
- Fractional-N PLL for Performance Computing in Samsung 8LPP
- Fractional-N PLL for Performance Computing in Samsung 14LPP