Build your own Edge AI SoC with SiFive RISC-V CPUs and CEVA AI chips
Jan 9, 2020 — by Eric Brown
SiFive and CEVA announced that CEVA-BX audio DSPs, CEVA-XM vision DSPs, and up to 12.5-TOPS NeuPro AI processors will be added to SiFive’s DesignShare program, enabling customers to create custom “Edge AI SoCs” built around SiFive’s RISC-V CPUs.
CEVA has partnered with RISC-V chip designer and manufacturer SiFive, bolstering SiFive’s DesignShare program with IP from several of CEVA’s proprietary DSPs and NPUs. The two companies are collaborating to help customers design and manufacture customized, “ultra-low-power domain-specific” Edge AI SoCs that combine SiFive’s RISC-V CPUs with CEVA’s coprocessors. CEVA is also contributing its CDNN deep neural network machine learning compiler.


CEVA CDNN workflow (left) and NeuPro-S block diagram
Edge AI SoCs are especially suitable for “on-device neural networks inferencing supporting imaging, computer vision, speech recognition and sensor fusion.” Initial applications include smart home, automotive, robotics, security and surveillance, augmented reality, industrial, and IoT.
CEVA is contributing the following to the Edge AI SoC DesignShare program:
- CEVA Deep Neural Network — The CDNN compiler provides network optimizations, quantization algorithms, data flow management, and CNN and RNN compute libraries for generating “fully-optimized runtime software.” Designed for deploying cloud-trained AI models on edge devices for inference, CDNN is optimized for the CEVA-XM and NeuPro architectures (a generic sketch of this kind of quantization appears after this list).
- NeuPro-S — This low-power neural accelerator for edge inferencing is designed primarily for imaging and computer vision applications. The NeuPro architecture provides “unique 4096 native 8×8 MACS processing.” NeuPro-S processor options range from 2-TOPS to 12.5-TOPS, or up to 100-TOPS in multi-core configurations (a rough worked estimate of how MAC count translates into TOPS appears below the block diagrams).
- CEVA-XM6 — The XM6 is a deep learning-enabled computer vision DSP optimized for low-power, real-time use cases. Applications include autonomous driving, sense-and-avoid drones, virtual and augmented reality, smart surveillance, smartphones, and robotics.
- CEVA-BX2 — This higher-end model in the CEVA-BX family is an audio/voice DSP for voice assistants, speech and natural language processing, object-based and 3D audio processing, and audio analytics for neural networks. Features include quad 32×32-bit MACs and octal 16×16-bit MACs, with enhanced support for 16×8-bit and 8×8-bit MACs. There’s also a lower-end CEVA-BX1 DSP optimized for battery-powered devices such as Bluetooth earbuds.
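For illustration only, the sketch below shows the kind of symmetric int8 post-training quantization that a neural network compiler such as CDNN automates. This is generic NumPy code, not CEVA’s actual API, and the layer shape and weights are hypothetical.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: map float weights to int8 plus one scale."""
    scale = np.abs(weights).max() / 127.0                       # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Stand-in for one layer of a cloud-trained model (hypothetical shape and values)
w = np.random.randn(64, 128).astype(np.float32)
q, scale = quantize_int8(w)
print("max quantization error:", np.abs(w - q.astype(np.float32) * scale).max())
```

A production compiler layers per-channel scales, activation calibration, and graph-level optimizations on top of this basic idea.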


Block diagrams for CEVA-XM6 (left) and CEVA-BX2
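As a rough sanity check on the NeuPro-S numbers above, peak throughput can be estimated from the MAC count and clock speed. The 1.5GHz clock used below is an assumed example, not a published CEVA figure, and real-world throughput also depends on utilization.

```python
def peak_tops(mac_units, clock_hz, ops_per_mac=2):
    """Peak throughput in TOPS; each MAC counts as two operations (multiply + add)."""
    return mac_units * ops_per_mac * clock_hz / 1e12

# 4096 8x8 MACs at an assumed 1.5 GHz ~= 12.3 TOPS, in the ballpark of the quoted 12.5 TOPS
print(peak_tops(4096, 1.5e9))
```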
SiFive did not say which of its RISC-V CPUs are available for Edge AI SoCs, but the program is presumably focused on its 64-bit, Linux-driven models on par with Cortex-A cores. These include the original U54 found in the Cortex-A35-like Freedom U540 SoC that powers the HiFive Unleashed SBC and the faster, Cortex-A55-like U74. In October, SiFive announced a Cortex-A72-like U84 CPU. SiFive also offers a variety of 32-bit, MCU-like designs, such as the FE300 found in the recent education-focused SiFive Learn Inventor board.
More on DesignShare
Demand is increasing for highly optimized, customized SoCs aimed at specific use cases, and these designs require a growing number of coprocessors. It is a challenge for chip designers, and especially for SoC design startups, to license and integrate all the necessary IP, including CPUs, GPUs, DSPs, neural accelerators, security blocks, and other coprocessors.
The challenge is even greater if you want to use the open, royalty-free RISC-V architecture: there are plenty of CPU designs to choose from, but not much beyond that. On the other hand, RISC-V’s open source foundation and flexible business practices make it uniquely suited as the anchor for increasingly heterogeneous SoCs.
Last year, SiFive launched its DesignShare program, which lets customers tap into proprietary IP contributed by participating chipmakers that use SiFive’s RISC-V CPUs. The idea is that SiFive helps quickly negotiate the licensing of the various coprocessor IP without requiring upfront payments. SiFive also handles non-recurring engineering (NRE) fees and royalty collection and can assist customers in integrating the components into custom designs built around its RISC-V CPUs.
IP vendors are protected from theft because “IP only goes out as finished chips,” says SiFive. “No GDSII or RTL leaves SiFive servers.”
In July, SiFive announced it had signed up 20 companies for DesignShare. Participants have provided GPUs and accelerators and, more recently, cryptographic solutions, in-chip monitoring, memory compilers, interconnects and controllers, clock management, and SerDes. High-profile members include Imagination Technologies, which is offering access to its PowerVR GPUs and neural network accelerator (NNA).
NXP is participating in another interesting approach to developing heterogeneous RISC-V-based SoCs. Last month, the OpenHW Group unveiled a Linux-driven CORE-V Chassis eval SoC, due for tape-out in 2H 2020, that is based on an NXP i.MX SoC design. The difference is that it runs a RISC-V, PULP-derived 64-bit, 1.5GHz CV64A CPU along with 32-bit CV32E cores.
Further information
SiFive’s DesignShare program for Edge AI SoCs with CEVA-BX audio DSPs, CEVA-XM vision DSPs, and NeuPro AI processors is available now. More information on the Edge AI SoC DesignShare program may be found in SiFive’s announcement.