ONNC (Open Neural Network Compiler) is a compilation framework designed specifically for proprietary deep learning accelerators (DLAs). Its software architecture expedites porting ONNC to any DLA design that supports ONNX (Open Neural Network Exchange) operators. The NVIDIA Deep Learning Accelerator (NVDLA) is a free and open architecture that provides a scalable, configurable, and modular design to address the computational demands of convolutional neural network inference, and many proprietary SoC designs integrate NVDLA as their inference engine. The lack of extensible compiler support for NVDLA has become the major bottleneck to supporting more AI models and optimizations. When ONNC meets NVDLA, it opens up opportunities for developers and researchers to explore the system design space of NVDLA-based SoCs, and it enables hardware customization and performance optimization. This talk will present how these two open source projects complement each other along the way.
Although NVDLA has an open source software stack with a virtual platform, a pre-built Linux kernel, a user-mode driver (UMD), and a kernel-mode driver (KMD) for developers to explore the full-system design, NVIDIA released its compiler (nvdla_compiler) only in binary form. The lack of an open source compiler, and of support for various hardware configurations in the software stack, has become a significant barrier for developers and researchers who want to improve and optimize an NVDLA-based design. ONNC is the first open source compiler available for NVDLA-based hardware designs. Its NVDLA backend can compile a model into an executable NVDLA Loadable file. It lifts many restrictions on software development for those who wish to leverage the NVDLA design in inference applications, and it facilitates the software/hardware co-design process and early-stage software development for NVDLA-based designs. This talk will describe how the sparks fly when ONNC meets NVDLA in more detail.