Machine Learning Processor IP


36 IP cores from 16 vendors (showing 1 - 10)
  • Machine Learning Processor
    • Extending Performance and Efficiency
    • Flexible Integration
    • Unified Software and Tools
  • Machine Learning Processor
    • Partner Configurable
    • Extremely Small Area
    • Single Toolchain
  • Machine Learning Processor
    • Outstanding Performance
    • Highly Efficient
    • Optimized Design
  • All-analog Neural Signal Processor
    • Analog AI Innovation: Blumind AMPL™ is a disruptive analog AI compute fabric for micropower artificial intelligence applications.
    • Precision and Accuracy: Blumind's all-analog AI compute delivers deterministic, precise inferencing at up to 1000x lower power than competing solutions, providing higher efficiency and longer battery life for always-on applications.
    • Low Latency Solutions: AMPL™ fabric delivers efficient low latency for real-time applications.
    • Analog Breakthrough: AMPL™ is the first all-analog AI on advanced standard CMOS architected to fundamentally mitigate process, voltage, temperature and drift variations.
  • High Performance RISC-V Processor for Edge Computing
    • Superscalar / Out-of-order Execution / 3-issue / 8-stage Pipeline
    • High level of configurability and design scalability
  • Image Signal Processor IP - High performance image signal processing for auto and industrial markets
    • 32-bit DVP interface, 24-bit ISP pipeline
    • Dual pixel per cycle throughput
    • Wide Dynamic Range Tone Mapping (WDR)
    • Multi-exposure HDR (native/built-in HDR, companded output, DOL/stagger output)
  • Arm Cortex-M55 Processor
    • Improve ML and DSP Performance
    • Accelerate Time to Market
    • Simplify Software Development
  • ARC EV Processors are fully programmable and configurable IP cores that are optimized for embedded vision applications
    • ARC processor cores are optimized to deliver the best performance/power/area (PPA) efficiency in the industry for embedded SoCs. Designed from the start for power-sensitive embedded applications, ARC processors implement a Harvard architecture for higher performance through simultaneous instruction and data memory access, and a high-speed scalar pipeline for maximum power efficiency. The 32-bit RISC engine offers a mixed 16-bit/32-bit instruction set for greater code density in embedded systems.
    • ARC's high degree of configurability and instruction set architecture (ISA) extensibility contribute to its best-in-class PPA efficiency. Designers have the ability to add or omit hardware features to optimize the core's PPA for their target application - no wasted gates. ARC users also have the ability to add their own custom instructions and hardware accelerators to the core, as well as tightly couple memory and peripherals, enabling dramatic improvements in performance and power-efficiency at both the processor and system levels.
    • Complete and proven commercial and open source tool chains, optimized for ARC processors, give SoC designers the development environment they need to efficiently develop ARC-based systems that meet all of their PPA targets.
  • IP platform for intelligence gathering chips at the Edge
    • High performance IoT solutions for AI at the Edge can now be created up to 30% faster
  • Highly scalable performance for classic and generative on-device and edge AI solutions
    • Flexible System Integration: The Neo NPUs can be integrated with any host processor to offload the AI portions of the application
    • Scalable Design and Configurability: The Neo NPUs support up to 80 TOPS with a single-core and are architected to enable multi-core solutions of 100s of TOPS
    • Efficient in Mapping State-of-the-Art AI/ML Workloads: Best-in-class performance for inferences per second with low latency and high throughput, optimized for achieving high performance within a low-energy profile for classic and generative AI
    • Industry-Leading Performance and Power Efficiency: High inferences per second per area (IPS/mm²) and per watt (IPS/W)