Vision AI Accelerator IP

Comparison of 5 IP cores from 4 vendors
  • NPU IP for Embedded AI
    • Fully programmable, efficiently executing neural networks, feature extraction, signal processing, audio, and control code
    • Scalable performance by design to meet a wide range of use cases, with configurations of up to 64 int8 MACs (natively 128 4x8 MACs) per cycle
    • Future proof architecture that supports the most advanced ML data types and operators
    Block Diagram -- NPU IP for Embedded AI
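As a rough guide to how a MAC-per-cycle figure like the one above translates into throughput, each MAC is conventionally counted as two operations (multiply + accumulate), so peak ops/s is MACs per cycle × 2 × clock frequency. The sketch below assumes a hypothetical 1 GHz clock, which the listing does not state:

```python
def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS; each MAC counts as 2 ops (multiply + add)."""
    return macs_per_cycle * 2 * clock_ghz / 1000.0

# Assumed 1 GHz clock (not given in the listing):
print(peak_tops(64, 1.0))   # 64 int8 MACs/cycle -> 0.128 TOPS
print(peak_tops(128, 1.0))  # native 128 4x8 MACs/cycle -> 0.256 TOPS
```

Actual deliverable performance also depends on utilization and memory bandwidth, so vendor TOPS figures are best read as peak numbers.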
  • AI accelerator (NPU) IP - 32 to 128 TOPS
    • Power efficiency of 18 TOPS/W
    • 36K to 56K MACs
    • Multi-job support
    Block Diagram -- AI accelerator (NPU) IP - 32 to 128 TOPS
  • AI accelerator (NPU) IP - 16 to 32 TOPS
    • Power efficiency of 18 TOPS/W
    • Scalable performance starting from 18K MACs
    • Capable of processing HD images on chip
    Block Diagram -- AI accelerator (NPU) IP - 16 to 32 TOPS
  • Edge AI Accelerator NNE 1.0
    • Minimal system-integration effort
    • Accelerates AI inference performance
    • High performance for power-sensitive applications
    Block Diagram -- Edge AI Accelerator NNE 1.0
  • Tensilica AI Max - NNA 110 Single Core
    • Scalable Design to Adapt to Various AI Workloads
    • Efficient in Mapping State-of-the-Art DL/AI Workloads
    • End-to-End Software Toolchain for All Markets and Large Number of Frameworks
    Block Diagram -- Tensilica AI Max - NNA 110 Single Core
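The quoted efficiency figures can be used to sanity-check a power budget: dividing a throughput target by the 18 TOPS/W efficiency quoted above gives the implied power draw. A minimal sketch of that arithmetic:

```python
def implied_power_w(tops: float, tops_per_watt: float) -> float:
    """Approximate power draw implied by a throughput and an efficiency figure."""
    return tops / tops_per_watt

# At the quoted 18 TOPS/W, the 32-128 TOPS configurations would draw roughly:
print(round(implied_power_w(32, 18), 2))   # ~1.78 W
print(round(implied_power_w(128, 18), 2))  # ~7.11 W
```

These are back-of-envelope peak figures; real power depends on workload, utilization, and process node.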