BrainChip Unveils MetaTF 2.13 on Newly Launched Developer Hub

BrainChip is making it easier than ever for developers to build intelligent, low-power applications at the edge with the launch of its all-new Developer Hub—a dedicated portal designed to accelerate innovation on the Akida™ platform. 

Introducing the BrainChip Developer Hub 

The Developer Hub is a centralized resource offering tools, pre-built model libraries, and support for developers building AI applications on Akida, BrainChip’s neuromorphic processor IP. Central to the hub is MetaTF, a toolkit that enables seamless conversion, quantization, and deployment of machine learning models. Working in familiar Python and Jupyter Notebook environments, developers can use MetaTF to quickly prototype, optimize, and deploy edge-ready AI models. 
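
As a rough illustration of the workflow described above, here is a minimal Python sketch of a quantize-then-convert flow. The package and function names (quantizeml, cnn2snn, akida) follow BrainChip's published MetaTF packages, but the exact signatures and defaults may differ between releases, and the toy Keras model is purely a placeholder; treat this as a sketch under those assumptions and check the Developer Hub documentation for the authoritative API.

```python
import numpy as np
from tensorflow import keras

# Assumed MetaTF entry points; verify names/arguments against the current MetaTF release.
from quantizeml.models import quantize, QuantizationParams
from cnn2snn import convert

# 1. Start from a Keras model (a toy classifier standing in for a real network).
float_model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 255, input_shape=(32, 32, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10),
])

# 2. Quantize weights and activations so the model maps onto Akida's integer hardware.
quantized_model = quantize(
    float_model,
    qparams=QuantizationParams(weight_bits=4, activation_bits=4),
)

# 3. Convert the quantized Keras model into an Akida model.
akida_model = convert(quantized_model)

# 4. Run inference (on Akida hardware if present, otherwise in software simulation).
dummy_input = np.random.randint(0, 255, size=(1, 32, 32, 3), dtype=np.uint8)
predictions = akida_model.predict(dummy_input)
print(predictions.shape)
```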

Newly Released Models: Eye-Tracking and Gesture Recognition 

BrainChip has released two high-performance models on the MetaTF platform: eye-tracking and gesture recognition. Both are designed to demonstrate the unique advantages of event-based AI processing using Akida, which delivers real-time performance with ultra-low power consumption. 

Eye-Tracking for Smart Glasses and Wearables 

In a space where efficiency and privacy are critical, BrainChip’s eye-tracking model stands out. Based on TENNs (Temporal Event-based Neural Networks), the model offers: 

  • Over 99% p10 accuracy (predictions within 10 pixels of the true gaze point) and a mean error under 2 pixels, as illustrated in the sketch after this list 
  • Just 219,000 parameters 
  • >70% sparsity, ensuring efficient real-time edge inference 
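
For readers who want to reproduce the accuracy figures above on their own data, here is a small, framework-agnostic sketch in plain NumPy. The prediction and ground-truth arrays are placeholders, and the p10 definition (fraction of predictions within 10 pixels of the labelled gaze point) is the common convention in event-based eye-tracking benchmarks, assumed here rather than quoted from BrainChip's evaluation code.

```python
import numpy as np

# Placeholder predicted and ground-truth gaze points, shape (N, 2), in pixel coordinates.
# In practice these would come from the eye-tracking model and the labelled dataset.
pred = np.array([[120.3, 64.1], [ 98.7, 71.5], [130.0, 60.2]])
truth = np.array([[121.0, 63.0], [100.0, 70.0], [128.5, 61.0]])

# Euclidean distance between predicted and true gaze point, per sample.
errors = np.linalg.norm(pred - truth, axis=1)

mean_error = errors.mean()                # mean pixel error (reported as < 2 px above)
p10_accuracy = (errors <= 10.0).mean()    # fraction within 10 px (reported as > 99% above)

print(f"mean error: {mean_error:.2f} px, p10 accuracy: {p10_accuracy:.1%}")
```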

Smart glasses powered by this model benefit from privacy-preserving, low-power vision processing. Akida’s ability to process only relevant motion-based input—rather than full video—dramatically reduces compute needs and power draw, making all-day wearables possible. 

The model’s flexibility supports both event-based and traditional frame-based (RGB) workflows, allowing developers to adapt it to a variety of sensor configurations. 

Gesture Recognition with State-of-the-Art Results 

BrainChip’s gesture recognition model, also based on TENNs, achieves state-of-the-art latency and 97% accuracy on the IBM DVS128 Gesture dataset using just 165,000 model parameters. This makes it an ideal solution for cost-sensitive, embedded applications in consumer electronics, robotics, and smart home devices. 

What sets this solution apart is the combination of: 

  • Akida’s event-based neuromorphic processing, and 
  • Event-based vision sensors, which capture high-speed motion with high sparsity 

This allows the system to process only the information relevant to a gesture—significantly reducing latency and power consumption—while maintaining exceptional precision. 
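
To make the sparsity point concrete, the sketch below bins a synthetic DVS128-style event stream into a frame tensor and measures how few pixels actually carry information in a time window. The event format, counts, and window length are illustrative assumptions (plain NumPy, not the Akida runtime or Prophesee SDK); real event data would come from the sensor.

```python
import numpy as np

# Synthetic DVS-style events: (timestamp_us, x, y, polarity). Only pixels where
# brightness changed emit events, so a hand gesture touches a small fraction of
# the 128x128 sensor in any given time window.
rng = np.random.default_rng(0)
num_events = 2_000
events = np.stack([
    rng.integers(0, 50_000, num_events),   # timestamps within a 50 ms window
    rng.integers(30, 90, num_events),      # x: events clustered where the hand moves
    rng.integers(40, 100, num_events),     # y
    rng.integers(0, 2, num_events),        # polarity (ON / OFF)
], axis=1)

# Accumulate the events into a single 128x128x2 frame for this window.
frame = np.zeros((128, 128, 2), dtype=np.int32)
np.add.at(frame, (events[:, 2], events[:, 1], events[:, 3]), 1)

active_pixels = np.count_nonzero(frame.any(axis=-1))
sparsity = 1.0 - active_pixels / (128 * 128)
print(f"active pixels: {active_pixels} / {128 * 128} (input sparsity ~ {sparsity:.0%})")
```

Because only the active pixels need to be processed, a sparse, event-driven processor such as Akida skips the large share of the sensor that carried no motion in that window, which is where the latency and power savings come from.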

“Faster gesture recognition is a game-changer and the opportunity to showcase the advancements that are possible through the combination of our neuromorphic Akida processor and Prophesee’s event-based vision camera,” said Dr. Lewis. “The ability to detect and track with unprecedented efficiency, precision and low power at the edge solves a key challenge for AI developers and opens up an array of possibilities across a wide spectrum of applications.” 

These computer vision capabilities unlock new use cases in: 

  • Autonomous vehicles 
  • Industrial automation 
  • IoT and smart environments 
  • Security and surveillance 
  • AR/VR and spatial computing 

The TENN architecture’s ability to handle both event streams and image sequences makes it highly adaptable for edge vision workloads. 

Accelerate Your AI at the Edge with BrainChip 

With the Akida platform and the new Developer Hub, BrainChip gives developers a head start in building responsive, power-efficient, and privacy-focused AI applications. Whether you’re enabling gaze control in smart glasses or building intuitive gesture interfaces for robotics, these new models demonstrate what’s possible with event-based AI processing at the edge. 

Ready to build smarter, faster, and more efficient AI applications?
Visit the Developer Hub to get started or contact us to learn more about integrating Akida into your products. 
