Flex Logix Announces Production Availability of InferX X1 PCIe Boards for Edge AI Systems
Enables customers to quickly bring to market AI inference products that leverage the industry's most efficient AI inference chip for edge systems
MOUNTAIN VIEW, Calif., Oct. 20, 2021 -- Flex Logix® Technologies, Inc., supplier of the most efficient AI edge inference accelerator and the leading supplier of eFPGA IP, today announced the production availability of its InferX X1P1 PCIe accelerator board. Designed to bring high-performance AI inference acceleration to edge servers and industrial vision systems, the new InferX X1 PCIe board provides customers with superior AI inference capabilities where high accuracy, high throughput and low power on complex models are needed.
Leveraging a unique dynamic TPU array architecture, the InferX X1 is designed around low-latency processing of Batch=1 workloads, with a special focus on challenging edge vision applications. The InferX X1 offers leading-edge performance while remaining flexible, allowing customers to seamlessly migrate to new AI models in the future and adapt to changing system requirements and protocols.
"The X1P1 has consistently demonstrated a superior value proposition for customers looking for efficient yet high-performance inference acceleration in edge applications," said Dana McCarty, Vice President of Sales and Marketing for Flex Logix's Inference Products. "Not only are we delivering on our promise to bring high-end AI capabilities to volume mainstream markets, but we are also allowing our customers to future proof their designs by enabling them to support evolving models, which is something many competitor products fail to provide."
About the InferX X1P1 Board
The InferX X1P1 board offers the most efficient AI inference acceleration for edge AI workloads such as YOLOv3. Many customers need high-performance, low-power object detection and other high-resolution image processing capabilities for robotic vision, security, retail analytics and many other applications.
The InferX X1P1 board is available in production quantities starting in November 2021, priced at $399 for single-unit quantities. Flex Logix also offers a software toolkit to support customer model porting to the X1P1 board.
About Flex Logix
Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry's most efficient AI edge inference accelerator, bringing AI to the masses in high-volume applications by providing much higher inference throughput per dollar and per watt. Flex Logix's eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs, and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix is headquartered in Mountain View, California, and also has offices in Austin, Texas. For more information, visit https://flex-logix.com.