EdgeCortix Granted Multiple Patents for Dynamic Neural Accelerator AI Processor Technology

TOKYO -- Oct. 5, 2021 -- EdgeCortix Inc. (Tokyo, Japan; CEO: Sakyasingha Dasgupta), the company that enables efficient AI processing at the edge with near cloud-level performance, today announced that it has been granted four patents on its artificial-intelligence-specific, runtime-reconfigurable processor technology.

"The four patents acquired in Japan and USA are fundamental technologies behind our Dynamic Neural Accelerator (DNA) hardware architecture, a highly energy-efficient and low-latency AI accelerator IP, designed specifically for on-device machine learning. DNA in combination with a proprietary software stack is the technology behind EdgeCortix's first co-processor chip for AI inference. These patents further strengthen the differentiation of our DNA technology against its competitors and are important additions to our existing portfolio of patents across hardware processor and compiler technologies," commented Sakyasingha Dasgupta, CEO of EdgeCortix group companies and one of the inventors of the acquired patents.

JP Patent No. 6834097
Patent Issue Date: 8th February 2021
Title of Invention: Neural Network Accelerator Hardware Specific Division of Inference

US Patent App. No. 17/186,003 (granted)
Date of Notification: 18th August 2021
Title of Invention: Neural Network Accelerator Hardware Specific Division of Inference into Groups of Layers

Assignee: EdgeCortix Pte. Ltd.

Invention Summary: This invention covers the generation of instructions for performing inference on a hardware system, such as an ASIC or FPGA, that groups neural network layers and avoids external memory accesses between them, reducing total external memory traffic compared with processing layers one by one and storing all intermediate data in external memory. This approach may offer the flexibility to handle a variety of neural networks, including convolutional neural networks such as the MobileNet variants, with performance and power efficiency close to that of a fixed-neural-network chip. The technique is particularly beneficial when an entire input layer cannot fit into limited on-chip memory, and reducing external memory accesses may also reduce variability in performance.
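The memory-traffic benefit of grouping layers can be illustrated with a toy cost model (this is a hypothetical sketch for intuition, not code from the patent): assume each group of fused layers reads one input tensor from external memory and writes one output tensor back, while intermediates inside a group stay in on-chip memory.

```python
# Illustrative sketch (not from the patent): estimate external (DRAM)
# tensor transfers for running N layers one by one versus in fused groups.
# Assumed toy cost model: every group performs one external read of its
# input and one external write of its output; tensors produced between
# layers of the same group remain in on-chip memory.

def external_accesses(num_layers, group_sizes):
    """Count external-memory tensor transfers for a given grouping."""
    assert sum(group_sizes) == num_layers
    return 2 * len(group_sizes)  # one read + one write per group

# Processing 8 layers one at a time: 16 transfers.
print(external_accesses(8, [1] * 8))  # -> 16

# Fusing the same 8 layers into two groups of four: 4 transfers.
print(external_accesses(8, [4, 4]))   # -> 4
```

Under this simplified model, grouping trades on-chip buffer capacity for a large reduction in external memory traffic, which is the effect the summary above describes.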

US Patent No. 11144822
Patent Issue Date: 12th October 2021
Title of Invention: Neural Network Accelerator Run-time Reconfigurability

JP Patent App. No. 2021-079197 (granted)
Date of Notification: 8th September 2021
Title of Invention: Neural Network Accelerator Run-time Reconfigurability
Assignee: EdgeCortix Pte. Ltd.

Invention Summary: This invention covers devices for performing neural network inference, such as accelerators, that include a novel "reduction interconnect" between the compute modules and on-chip memory for accumulating compute-module outputs on the fly, avoiding an extra read from and write to on-chip memory. The reduction interconnect is reconfigurable, establishing connections between compute modules that bypass on-chip memory so that entire tasks, or portions of them, can be inferred efficiently at run time. For example, in an accelerator for inference of any deep neural network, the reduction interconnect may allow each compute module to select between direct access to memory and access through an auxiliary adder circuit. This freedom to select connectivity may allow the accelerator to compute multiple input-channel tiles or kernel pixels in parallel, with multiple compute modules working fully synchronously.
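The saving from accumulating outputs in flight can likewise be sketched with a toy model (again a hypothetical illustration, not the patented circuit): accumulating partial sums through on-chip memory costs a read-modify-write per compute module, whereas an adder path between the modules and memory sums the outputs before they ever reach memory.

```python
# Illustrative sketch (not from the patent): compare accumulating
# compute-module partial sums through on-chip memory with summing them
# on the fly via a reduction path before memory.

def accumulate_via_memory(partials):
    """Each module's output is merged via read-modify-write:
    one read and one write of on-chip memory per partial sum."""
    mem = 0
    accesses = 0
    for p in partials:
        acc = mem          # read current accumulator from memory
        mem = acc + p      # add the module's output, write back
        accesses += 2      # one read + one write
    return mem, accesses

def accumulate_via_reduction(partials):
    """An adder path between modules and memory sums outputs in
    flight; only the final result is written once."""
    total = sum(partials)  # reduction happens before memory
    return total, 1        # single write of the final sum

partials = [3, 1, 4, 1, 5]         # outputs of five compute modules
print(accumulate_via_memory(partials))     # -> (14, 10)
print(accumulate_via_reduction(partials))  # -> (14, 1)
```

Both paths produce the same result; the reduction path simply removes the per-module memory round trips, which is the bottleneck the summary above says the interconnect avoids.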

The EdgeCortix group companies hold an extensive portfolio of patents and patent applications covering all key products, which creates shareholder value by giving EdgeCortix both freedom to operate and significant product differentiation.

About EdgeCortix

EdgeCortix, founded in 2019, is a leading provider of artificial intelligence hardware acceleration solutions, specifically designed for edge computing scenarios. The Company's revolutionary Dynamic Neural Accelerator (DNA) architecture is a reconfigurable, scalable, and power-efficient AI processor design. DNA, combined with the company's proprietary software, enables easy deployment of neural network models with high energy efficiency and low latency on custom ASICs or FPGAs. The Company provides software, AI processor hardware, and IP that address the high-performance and low-latency requirements of advanced driver assistance systems, autonomous robots, financial technology, manufacturing, smart cities, and other advanced vision systems.
