Improving performance and security in IoT wearables
By Pritesh Mandaliya, Cypress Semiconductor
Many IoT applications – including connected cars, factory automation, smart cities, connected health, and wearables – require nonvolatile memory to store data and code. Traditionally, embedded applications have used external Flash memory for this purpose.
However, as semiconductor processes move to smaller geometries, scaling and cost challenges make it increasingly difficult to embed Flash memory within the host SoC. Future MCU and SoC designs are therefore turning to system-in-package (SiP) integration or external Flash. This trend does not address the needs of IoT applications such as wearables, with their small form factors, strict cost constraints, and low-power requirements.
To address these issues, Flash memory manufacturers are developing architectures that optimize size and power consumption while introducing new capabilities for greater endurance, reliability, security, and safety.
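For context on what talking to an external Flash device looks like in practice, below is a minimal, hypothetical C sketch of the standard JEDEC Read-ID transaction (opcode 0x9F) that firmware commonly uses to probe an external SPI NOR flash. The spi_transfer_fn callback and the fake_spi_transfer stub are illustrative placeholders so the sketch compiles and runs without hardware; they are not a vendor driver API and are not taken from the full article.

```c
/*
 * Hypothetical sketch: probing an external SPI NOR flash with the
 * standard JEDEC Read-ID command (0x9F). The transfer callback and
 * stub below are placeholders, not a vendor API.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Full-duplex SPI transfer: clock out tx[0..len) while capturing rx[0..len). */
typedef void (*spi_transfer_fn)(const uint8_t *tx, uint8_t *rx, size_t len);

/* Issue the JEDEC Read-ID command and unpack the three response bytes. */
static void flash_read_jedec_id(spi_transfer_fn xfer,
                                uint8_t *mfg, uint8_t *type, uint8_t *density)
{
    uint8_t tx[4] = { 0x9F, 0x00, 0x00, 0x00 };  /* opcode + 3 dummy bytes */
    uint8_t rx[4] = { 0 };

    xfer(tx, rx, sizeof tx);

    *mfg     = rx[1];   /* manufacturer ID */
    *type    = rx[2];   /* memory type     */
    *density = rx[3];   /* capacity code   */
}

/* Stand-in transfer function so the sketch runs without hardware:
 * it pretends the device answered with arbitrary example ID bytes. */
static void fake_spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len)
{
    (void)tx;
    const uint8_t reply[4] = { 0x00, 0x01, 0x60, 0x18 };  /* example values only */
    memcpy(rx, reply, len < sizeof reply ? len : sizeof reply);
}

int main(void)
{
    uint8_t mfg, type, density;

    flash_read_jedec_id(fake_spi_transfer, &mfg, &type, &density);
    printf("JEDEC ID: mfg=0x%02X type=0x%02X density=0x%02X\n",
           mfg, type, density);
    return 0;
}
```

In a real design, the callback would be bound to the MCU's SPI peripheral driver, and the returned ID would be checked against the expected part before the firmware trusts the device for code or data storage.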