Reviewing different Neural Network Models for Multi-Agent games on Arm using Unity
During the Game Developers Conference (GDC) in March 2023, we showcased our multi-agent demo, Candy Clash, a mobile game containing 100 intelligent agents. In the demo, the agents are built with Unity’s ML-Agents Toolkit, which allows us to train them using reinforcement learning (RL). To find out more about the demo and its development, see our previous blog series. Previously, the agents used a simple Multi-Layer Perceptron (MLP) neural network (NN) model. This blog explores the impact of using other types of neural network models on the gaming experience and performance.
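For context, the network architecture an ML-Agents agent uses is specified in its trainer configuration file. The minimal sketch below assumes a hypothetical behavior name (RabbitAgent) and illustrative hyperparameter values; it shows how the default fully connected (MLP) settings under network_settings can be extended with a memory block, which swaps in a recurrent (LSTM) model of the kind such a comparison could cover.

```yaml
# Sketch of an ML-Agents trainer configuration (values are illustrative).
# The behavior name "RabbitAgent" is a placeholder, not taken from the demo.
behaviors:
  RabbitAgent:
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      normalize: false
      hidden_units: 128      # width of each fully connected (MLP) layer
      num_layers: 2          # depth of the MLP
      # Adding a memory block gives the policy a recurrent (LSTM) component,
      # one alternative to the plain MLP baseline.
      memory:
        memory_size: 128
        sequence_length: 64
    max_steps: 500000
```

Training with such a configuration uses the standard mlagents-learn command, and the resulting model can then be profiled in the game to compare inference cost and agent behavior against the MLP baseline.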
To read the full article, click here