Spectral introduces NeuralRAM, a family of memory architectures in the 14nm FinFET technology node targeted at a wide range of AI algorithms

SOMERVILLE, N.J., Sept. 25, 2018 -- Spectral Design & Test Inc. (SDT) today announced the rollout of a family of Memory IP on the GlobalFoundries 14nm FinFET process that addresses the needs of emerging Artificial Intelligence (AI), Machine Learning, and Deep Learning SoC applications.

“Deep learning algorithms require many, sometimes thousands of, small processors that all require access to the same data. AI memory architectures require high-bandwidth data availability to various processing engines that are in close proximity to each other. Furthermore, traditional cache replacement algorithms are not effective for these applications,” said William Palumbo, COO of SDT. “Our first-generation NeuralRAM in the GF 14nm LP process is a family of configurable memory macros that enables concurrent access to individual data items or sectors of data at extremely high speeds (>2 GHz) with minimal impact on dynamic power. NeuralRAM has the innate ability to interpret the sequence of read/write patterns and make appropriate adjustments to achieve extremely low dynamic power. With appropriate levels of read and write assist, Spectral designs can seamlessly support dynamic voltage scaling without stressing devices.”

Spectral Design will be exhibiting at the GlobalFoundries Technology Conference in Santa Clara on September 25th, 2018, and will be available to discuss its Memory IP with attendees of the conference.

For more information about Spectral’s silicon-proven Memory IP, please reach out to us at sales@spectral-dt.com.

Or visit our website at: www.spectral-dt.com
