Khronos Launches Dual Neural Network Standard Initiatives

Industry Call for Participation in new Neural Network Exchange Format working group; OpenVX standard for vision processing releases Neural Network extension

October 4th, 2016 – San Francisco, CA – The Khronos™ Group, an open consortium of leading hardware and software companies, today announced the creation of two standardization initiatives to address the growing industry interest in the deployment and acceleration of neural network technology. Firstly, Khronos has formed a new working group to create an API-independent standard file format for exchanging deep learning data between training systems and inference engines. Work on generating requirements and detailed design proposals for the Neural Network Exchange Format (NNEF™) is already underway, and companies interested in participating are welcome to join Khronos for a voice and a vote in the development process. Secondly, the OpenVX™ working group has released an extension to enable Convolutional Neural Network topologies to be represented as OpenVX graphs and mixed with traditional vision functions.

Neural network technology has seen recent explosive progress in solving pattern-matching tasks in computer vision such as object recognition, face identification, image search, and image-to-text, and is also playing a key part in enabling driver assistance and autonomous driving systems. Convolutional Neural Networks (CNNs) are computationally intensive, and so many companies are actively developing mobile and embedded processor architectures to accelerate neural network-based inferencing at high speed and low power. As a result of such rapid progress, the market for embedded neural network processing is in danger of fragmenting, creating barriers for developers seeking to configure and accelerate inferencing engines across multiple platforms.

About the Neural Network Exchange Format (NNEF)
Today, most neural network toolkits and inference engines use proprietary formats to describe the trained network parameters, making it necessary to construct many proprietary importers and exporters to enable a trained network to be executed across multiple inference engines. The Khronos Neural Network Exchange Format (NNEF) is designed to simplify the process of creating a network with one tool and running that trained network on other toolkits or inference engines. This can reduce deployment friction and encourage a richer mix of cross-platform deep learning tools, engines and applications.

The NNEF standard encapsulates neural network structure, data formats, commonly used operations (such as convolution, pooling, and normalization) and formal network semantics. This enables the essentials of a trained network to be reliably exported and imported across tools and engines. NNEF is purely a data interchange format and deliberately does not prescribe how an exported network has been trained, or how an imported network is to be executed. This ensures that the data format does not hinder innovation and competition in this rapidly evolving domain. More information on the NNEF initiative is available at the NNEF Home Page.
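
To make the separation of concerns concrete, the sketch below is a toy C description of the kinds of information a trained-network exchange format needs to carry: network structure, tensor data, and operation parameters. This is purely illustrative and is not NNEF syntax (the format was still being defined at the time of this announcement); all type and field names are hypothetical.

    /* Illustrative only: NOT NNEF syntax. A toy model of what a trained-network
     * exchange format must capture, independent of how the network is trained
     * or executed. All names are hypothetical. */
    #include <stddef.h>

    typedef enum { OP_CONVOLUTION, OP_POOLING, OP_NORMALIZATION, OP_ACTIVATION } op_kind;

    typedef struct {
        const char  *name;      /* e.g. "conv1_weights" */
        size_t       rank;      /* number of dimensions */
        size_t       dims[4];   /* tensor shape, e.g. {64, 3, 7, 7} */
        const float *data;      /* trained parameter values */
    } tensor_desc;

    typedef struct {
        op_kind            kind;     /* which commonly used operation this layer is */
        const char        *input;    /* name of the tensor feeding this operation */
        const char        *output;   /* name of the tensor this operation produces */
        size_t             stride;   /* operation parameters (convolution/pooling) */
        size_t             padding;
        const tensor_desc *weights;  /* NULL for parameter-free operations */
    } op_desc;

    typedef struct {
        const char    *name;     /* network identifier */
        size_t         num_ops;
        const op_desc *ops;      /* topologically ordered list of operations */
    } network_desc;
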

About the OpenVX Neural Network Extension
The OpenVX Neural Network extension specifies an architecture for executing CNN-based inference in OpenVX graphs. The extension defines a multi-dimensional tensor object data structure which can be used to connect neural network layers, represented as OpenVX nodes, to create flexible CNN topologies. OpenVX neural network layer types include convolution, pooling, fully connected, normalization, softmax and activation – with nine different activation functions. The extension enables neural network inferencing to be mixed with traditional vision processing operations in the same OpenVX graph.
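
As an illustration, the following C sketch builds a single OpenVX graph that mixes a traditional vision node with convolution and activation layers. It is a minimal sketch based on the provisional extension as published: the header name, the vxConvolutionLayer/vxActivationLayer entry points, the vx_nn_convolution_params_t fields and the tensor dimensions are assumptions that may differ in the final specification.

    #include <string.h>
    #include <VX/vx.h>
    #include <VX/vx_khr_nn.h>   /* provisional Neural Network extension header (assumed name) */

    /* Sketch: one graph containing a Gaussian blur (traditional vision) plus a
     * convolution layer and a ReLU activation layer (neural network inference).
     * Shapes, fixed-point settings and parameter values are placeholders;
     * object release and error checking are omitted for brevity. */
    vx_status build_mixed_graph(vx_context context)
    {
        vx_graph graph = vxCreateGraph(context);

        /* Traditional vision processing: 3x3 Gaussian blur on a U8 image.
         * In a real pipeline its output would feed the tensor input below. */
        vx_image input   = vxCreateImage(context, 224, 224, VX_DF_IMAGE_U8);
        vx_image blurred = vxCreateImage(context, 224, 224, VX_DF_IMAGE_U8);
        vxGaussian3x3Node(graph, input, blurred);

        /* Multi-dimensional tensors connecting the neural network layers
         * (dimension ordering and Q8 fixed-point format are illustrative). */
        vx_size in_dims[4]  = {224, 224, 3, 1};
        vx_size out_dims[4] = {224, 224, 16, 1};
        vx_size w_dims[4]   = {3, 3, 3, 16};
        vx_size b_dims[1]   = {16};
        vx_tensor in_tensor = vxCreateTensor(context, 4, in_dims,  VX_TYPE_INT16, 8);
        vx_tensor conv_out  = vxCreateTensor(context, 4, out_dims, VX_TYPE_INT16, 8);
        vx_tensor act_out   = vxCreateTensor(context, 4, out_dims, VX_TYPE_INT16, 8);
        vx_tensor weights   = vxCreateTensor(context, 4, w_dims,   VX_TYPE_INT16, 8);
        vx_tensor biases    = vxCreateTensor(context, 1, b_dims,   VX_TYPE_INT16, 8);

        /* Neural network layers as graph nodes (entry points assumed from the
         * provisional extension; remaining policy fields left at defaults here). */
        vx_nn_convolution_params_t conv_params;
        memset(&conv_params, 0, sizeof(conv_params));
        conv_params.padding_x = 1;
        conv_params.padding_y = 1;
        vxConvolutionLayer(graph, in_tensor, weights, biases,
                           &conv_params, sizeof(conv_params), conv_out);
        vxActivationLayer(graph, conv_out, VX_NN_ACTIVATION_RELU, 0, 0, act_out);

        /* Verify and execute the combined vision + inference graph. */
        vx_status status = vxVerifyGraph(graph);
        if (status == VX_SUCCESS)
            status = vxProcessGraph(graph);
        return status;
    }
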

Today, OpenVX has also released an Import/Export extension that complements the Neural Network extension by defining an API to import and export OpenVX objects, such as traditional computer vision nodes, data objects of a graph or partial graph, and CNN objects including network weights and biases or complete networks.
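
A hedged sketch of how the Import/Export extension might be used to serialize a verified graph, including its weight and bias tensors, into a memory blob for deployment: the vxExportObjectsToMemory entry point, the VX_IX_USE_EXPORT_VALUES enum and the parameter order are assumptions based on the extension's published form and may differ in the provisional specification.

    #include <VX/vx.h>
    #include <VX/vx_khr_ix.h>   /* Export and Import extension header (assumed name) */

    /* Sketch: export a verified graph to a memory blob that can be shipped to a
     * target device and re-imported there with the matching import call.
     * Entry-point name, 'uses' enum and parameter order are assumptions; check
     * the provisional vx_khr_ix specification before relying on them. */
    vx_status export_graph_blob(vx_context context, vx_graph graph,
                                const vx_uint8 **blob, vx_size *blob_size)
    {
        vx_reference refs[1] = { (vx_reference)graph };
        vx_enum      uses[1] = { VX_IX_USE_EXPORT_VALUES };  /* also export data values (assumed) */

        return vxExportObjectsToMemory(context, 1, refs, uses, blob, blob_size);
    }
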

The high-level abstraction of OpenVX enables implementers to accelerate a dataflow graph of vision functions across a diverse array of hardware and software acceleration platforms. The inclusion of neural network inferencing functionality in OpenVX enables the same portable, processor-independent expression of functionality with significant freedom and flexibility in how that inferencing is actually accelerated. The OpenVX Neural Network extension is released in provisional form to enable developers and implementers to provide feedback before finalization; industry feedback is welcomed at the OpenVX Forums. More details on OpenVX and the new extensions can be found at the OpenVX Home Page.

Khronos is coordinating its neural network activities, and expects that NNEF files will be able to represent all aspects of an OpenVX neural network graph, and that OpenVX will enable import of network topologies via NNEF files through the Import/Export extension, once the NNEF format definition is complete.

Industry Support
“As an active working group member and one of the earliest OpenVX adopters, VeriSilicon is excited to see Khronos extend its support to deep learning and neural networks,” said Shang-Hung Lin, Vice President for Vision and Image Product Development at VeriSilicon. “Programmability and interoperability between vision functions and the Neural Net extension makes OpenVX a perfect programming interface for VeriSilicon’s VIP8000 ultra-low-power, scalable vision processor solution, which combines neural network engines, OpenVX-optimized shader programming engines, and a special interconnect logic called tensor processing fabric to allow collaborative computing for vision and neural net technology. VeriSilicon looks forward to participating in the Khronos NNEF working group to bridge the disparate market of deep learning frameworks and toolkits. A simple and standard neural net format is imperative to facilitate users choosing their favorite training tools and deploying the trained network to different inference engines in different applications.”

About The Khronos Group
The Khronos Group is an industry consortium creating open standards to enable the authoring and acceleration of parallel computing, graphics, vision and neural nets on a wide variety of platforms and devices. Khronos standards include Vulkan™, OpenGL®, OpenGL® ES, OpenGL® SC, WebGL™, OpenCL™, SPIR™, SPIR-V™, SYCL™, WebCL™, OpenVX™, EGL™, COLLADA™, and glTF™. Khronos members are enabled to contribute to the development of Khronos specifications, are empowered to vote at various stages before public deployment, and are able to accelerate the delivery of their cutting-edge accelerated platforms and applications through early access to specification drafts and conformance tests.
