Automotive MCU IP
14 IP from 7 vendors (1 - 10)
-
8-bit MCU
- FAST architecture, 4 times faster than the original implementation
- Software compatible with industry standard 68HC11
- 10 times faster multiplication
- 16 times faster division (see the cycle-count sketch below)
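To make the speed-up figures concrete, the sketch below applies them to the documented M68HC11 instruction timings (MUL = 10 cycles, IDIV = 41 cycles). It is illustrative only: the original cycle counts are from the Motorola M68HC11 reference manual, and the resulting FAST-core numbers are simply the listing's factors applied to them, not vendor-published timings.

```python
# Illustrative only: original M68HC11 timings are from the Motorola
# reference manual (MUL = 10 cycles, IDIV = 41 cycles); the 10x / 16x
# factors come from the listing above. Actual FAST-core timings are
# vendor-specific and may differ.
ORIGINAL_CYCLES = {"MUL": 10, "IDIV": 41}
SPEEDUP = {"MUL": 10, "IDIV": 16}

for insn, cycles in ORIGINAL_CYCLES.items():
    fast = cycles / SPEEDUP[insn]
    print(f"{insn}: {cycles} cycles (68HC11) -> ~{fast:.1f} cycles (FAST core)")

# At the same clock, code dominated by MUL/IDIV sees much more than the
# quoted 4x average speed-up; the 4x figure refers to a typical
# instruction mix over a whole program.
```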
-
8-bit Microcontroller IP - legacy architecture - replacement for 68HC11K MCUs
- Cycle compatible with original implementation
- Software compatible with 68HC11K industry standard
- I/O wrapper, making it a pin-compatible core
- SFRs remappable to any 4 KB memory page
-
Ultra-low-power 32 kHz RC oscillator - High temperature (Grade 1, Tj = 150°C)
- Best-in-class power consumption of the always-on domain during sleep / deep-sleep modes
- Fast wake-up
- Active, shutdown, and standby modes (see the duty-cycle sketch below)
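The listing quotes no current figures, so the sketch below uses hypothetical values throughout; it only illustrates how an always-on 32 kHz domain and fast wake-up enter the average-current and battery-life math for a duty-cycled application.

```python
# Hypothetical numbers only -- the listing above quotes no currents.
# Shows how the always-on 32 kHz domain dominates average draw in a
# heavily duty-cycled (mostly-sleeping) application.
SLEEP_CURRENT_UA = 0.5      # always-on domain (32 kHz RC osc + retention), assumed
ACTIVE_CURRENT_UA = 2000.0  # MCU active, assumed
ACTIVE_MS_PER_WAKE = 5.0    # work done per wake-up, assumed
WAKE_PERIOD_S = 10.0        # one wake-up every 10 s, assumed

duty = (ACTIVE_MS_PER_WAKE / 1000.0) / WAKE_PERIOD_S
avg_ua = ACTIVE_CURRENT_UA * duty + SLEEP_CURRENT_UA * (1.0 - duty)
print(f"duty cycle      : {duty:.4%}")
print(f"average current : {avg_ua:.2f} uA")

# Battery life from an assumed 220 mAh coin cell (220,000 uAh / avg uA = hours):
print(f"battery life    : {220_000.0 / avg_ua / 24 / 365:.1f} years")
```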
-
2D/3D Vector Graphics Accelerator / GPU (Graphics Processing Unit)
- D/AVE HD is an evolution of the D/AVE family, supporting high-quality 2D rendering and basic 3D rendering for displays up to 4K x 4K. With its high customizability, D/AVE HD targets modern graphics applications in the industrial, medical, military, avionics, automotive, and consumer markets. It is designed for speed and rich functionality while remaining optimized for size and footprint; its footprint-optimized variants are especially suitable for low-power wearable products.
-
3D OpenGL ES GPU (Graphics Processing Unit)
- Scalability throughout the entire design
- Unified Shader Architecture
- Massively parallel execution with fine-grained multithreading
- Bandwidth reduction through, e.g., on-the-fly data compression/decompression (see the bandwidth estimate below)
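To show why on-the-fly compression/decompression matters, the sketch below estimates raw framebuffer write traffic for an assumed display; the resolution, refresh rate, pixel format, overdraw, and 2:1 compression ratio are all assumptions, not figures from the listing.

```python
# Hypothetical figures -- resolution, refresh rate, pixel format, overdraw
# and the 2:1 compression ratio are assumptions used only to size the problem.
WIDTH, HEIGHT = 1920, 1080   # assumed display resolution
BYTES_PER_PIXEL = 4          # RGBA8888
FPS = 60
OVERDRAW = 2.0               # each pixel written ~2x per frame, assumed
COMPRESSION = 2.0            # assumed on-the-fly compression ratio

raw = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * OVERDRAW
print(f"uncompressed framebuffer traffic: {raw / 1e9:.2f} GB/s")
print(f"with {COMPRESSION:.0f}:1 compression           : {raw / COMPRESSION / 1e9:.2f} GB/s")
```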
-
Highly scalable performance for classic and generative on-device and edge AI solutions
- Flexible System Integration: The Neo NPUs can be integrated with any host processor to offload the AI portions of the application
- Scalable Design and Configurability: The Neo NPUs support up to 80 TOPS with a single-core and are architected to enable multi-core solutions of 100s of TOPS
- Efficient in Mapping State-of-the-Art AI/ML Workloads: Best-in-class inferences per second with low latency and high throughput, optimized for high performance within a low-energy profile for classic and generative AI
- Industry-Leading Performance and Power Efficiency: High inferences per second per area (IPS/mm²) and per power (IPS/W); see the throughput estimate below
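Only the 80 TOPS single-core figure below comes from the listing; the core count, sustained utilization, and per-inference model cost are hypothetical assumptions used to show how a TOPS rating maps onto the inferences-per-second metric the entry cites.

```python
# Only the 80 TOPS single-core figure comes from the listing above;
# core count, utilization and model cost are hypothetical assumptions.
TOPS_PER_CORE = 80.0
CORES = 4                    # assumed multi-core configuration
UTILIZATION = 0.40           # assumed fraction of peak actually sustained
MODEL_GMACS = 4.1            # assumed cost per inference (a ResNet-50-class model)

ops_per_inference = MODEL_GMACS * 1e9 * 2          # 1 MAC = 2 ops
sustained_ops = TOPS_PER_CORE * CORES * 1e12 * UTILIZATION
print(f"peak throughput      : {TOPS_PER_CORE * CORES:.0f} TOPS")
print(f"estimated inferences : {sustained_ops / ops_per_inference:,.0f} per second")
```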
-
Telematics Processors IP
- Core and infrastructure
  - ARM® Cortex™-R4 MCU
  - Embedded SRAM
  - SDRAM controller
-
8-bit FAST Microcontroller
- FAST architecture, 4 times faster than the original implementation
- Software compatible with industry standard 68HC11
- 10 times faster multiplication
- 16 times faster division
-
8-bit FAST Microcontroller
- FAST architecture, 4 times faster than the original implementation
- Software compatible with 68HC11 industry standard
- 10 times faster multiplication
- 16 times faster division