Nvidia EGX edge-AI stack debuts on four new Jetson and Tesla-based Adlink systems

May 29, 2019 — by Eric Brown

Nvidia’s “Nvidia EGX” solution for AI edge computing combines its Nvidia Edge Stack and Red Hat’s Kubernetes-based OpenShift platform running on Linux-driven Jetson modules and Tesla boards. Adlink unveiled four edge servers based on EGX using the Nano, TX2, Xavier, and Tesla.

Announced at this week’s Computex show in Taiwan, Nvidia EGX is billed as an “On-Prem AI Cloud-in-a-Box” that can run cloud-native container software on edge servers. The platform also lets you run EGX-developed edge server applications in the cloud.

Nvidia EGX is built on the Nvidia Edge Stack equipped with AI-enabled CUDA libraries, running on Nvidia's Arm-based, Linux-driven Jetson Nano, Jetson TX1/TX2, and Jetson Xavier modules, as well as its high-end Tesla GPUs up to the T4 server accelerator. The key new ingredient is the Kubernetes cloud container platform, enabled here with Red Hat's OpenShift container orchestration stack.

One of the early EGX adopters is Adlink, which announced four embedded edge server gateways with the software (see further below).



Nvidia EGX architecture and Jetson Nano

The Nvidia EGX platform joins a wave of AI-enabled edge solutions ranging from Google's Edge TPU devices to the Linux Foundation's LF Edge initiative. Like these and other edge platforms, EGX is designed not only to orchestrate data flow between device, gateway, and cloud, but also to reduce that growing traffic by running cloud-derived AI stacks directly on edge devices for low-latency response times.

Nvidia EGX supports remote IoT management via AWS IoT Greengrass and Microsoft Azure IoT Edge. The platform can also be deployed with pre-certified security, networking, and storage technologies from Mellanox and Cisco.
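For a sense of what edge-to-cloud messaging through one of these services looks like, below is a minimal sketch assuming a Greengrass (v1) Lambda deployment. The topic name and payload fields are hypothetical, and this is our illustration rather than Nvidia or Adlink code.

```python
# Sketch of a Greengrass (v1) Lambda publishing edge telemetry to AWS IoT.
# Topic name and payload fields are hypothetical examples.
import json
import greengrasssdk

# The Greengrass SDK routes this through the local core, which syncs
# with the AWS IoT cloud whenever connectivity allows.
iot_client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # Forward a (hypothetical) inference result from the edge device
    iot_client.publish(
        topic="egx/edge/inference",
        payload=json.dumps({"device": "jetson-nano-01",
                            "objects_detected": event.get("count", 0)}),
    )
```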

While it’s already possible to combine Nvidia Jetson devices running Nvidia Edge Stack with Kubernetes orchestrated edge containers, EGX streamlines the process. The EGX stack is said to handle edge server OS installation (i.e. Linux), Kubernetes deployment, and device provisioning and updating behind the scenes as part of a hardened turnkey system.

The core Nvidia Edge Stack integrates Nvidia drivers and CUDA technologies including a Kubernetes plugin, Docker container runtime, and CUDA-X libraries. It also incorporates containerized AI frameworks and applications, including TensorRT, TensorRT Inference Server, and DeepStream. Optimized for certified servers, Nvidia Edge Stack can be downloaded from the Nvidia NGC registry.
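To show where the Kubernetes plugin fits, here is a rough sketch, not taken from Nvidia's EGX documentation, that uses the standard Python kubernetes client to schedule a container claiming a GPU through the device plugin's nvidia.com/gpu resource. The pod name and NGC image tag are illustrative.

```python
# Sketch: scheduling a GPU-accelerated container on a Kubernetes cluster
# that runs the Nvidia device plugin. Pod and image names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="trt-inference-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="inference",
            image="nvcr.io/nvidia/tensorrt:19.05-py3",  # example NGC image
            # The device plugin advertises GPUs as a named resource, so
            # containers claim them through ordinary resource limits.
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/gpu": "1"}),
        )],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```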

Enterprise EGX servers are available from ATOS, Cisco, Dell EMC, Fujitsu, Hewlett Packard Enterprise, Inspur and Lenovo, says Nvidia. EGX-ready devices are also available from server and IoT system makers including Abaco, Acer, Adlink, Advantech, ASRock Rack, Asus, AverMedia, Cloudian, Connect Tech, Curtiss-Wright, Gigabyte, Leetop, MiiVii, Musashi Seimitsu, QCT, Sugon, Supermicro, Tyan, WiBase and Wiwynn. Nvidia claims 40+ early adopters, with testimonial quotes from Foxconn, GE Healthcare, and Seagate.

AI at the edge “is enabling organizations to use the massive amounts of data collected on sensors and devices to create smart manufacturing, healthcare, aerospace and defense, transportation, telecoms, and cities to provide engaging customer experiences,” says Adlink in its announcement of the EGX-enabled edge systems covered below. Both Nvidia and Adlink mention benefits like faster product safety inspections, on-the-fly traffic monitoring, and more timely and accurate interpretations of medical scans.

The technologies are also targeted at surveillance, facial recognition, and customer behavior analysis. While AI-enabled surveillance applications can improve security, there are growing concerns about misuse, with the city of San Francisco recently banning police from using facial recognition. Corporations can use the technology to track individuals in public spaces and sell the data and analysis. Police departments and government security agencies around the world are using AI tech to track and control dissidents.

 
Adlink’s four new EGX systems

Adlink was one of the first embedded vendors to announce new systems built around EGX running on Jetson and Tesla hardware. Like Nvidia, Adlink never mentions Linux in the announcement or product pages, but all these modules are designed to run Linux.



Adlink M100-Nano-AINVR (left) and M300-Xavier-ROS2

The Adlink roll-out starts at the low end with the Jetson Nano-based M100-Nano-AINVR edge server for surveillance and the Jetson TX2-based DLAP-201-JT2 system for object detection. The higher-end M300-Xavier-ROS2 uses the Jetson Xavier to drive an autonomous robot controller, and the ALPS-4800 Edge Server with Tesla incorporates Nvidia's powerful Tesla graphics cards to create an AI training platform.

 
M100-Nano-AINVR

The M100-Nano-AINVR is a compact network video recorder (NVR) platform aimed at "identity detection and autonomous tracking in public transport and access control," says Adlink. Built around the Jetson Nano, Nvidia's latest and lowest-power Jetson module, the M100-Nano-AINVR incorporates 8x Power-over-Ethernet (PoE) enabled Gigabit Ethernet ports for IP cameras, as well as 2x standard GbE ports.



M100-Nano-AINVR, front and back

The Jetson Nano, which features 4x Cortex-A57 cores and a relatively modest 128-core Maxwell GPU with CUDA support, runs the Nvidia EGX stack with the help of 4GB LPDDR4 and 16GB eMMC. Adlink adds the GbE ports, as well as 4x USB 3.0 ports, a micro-USB 2.0 OTG, and a 2.5-inch SATA SSD slot. Other features include an HDMI 2.0 port, 2x RS-232/485 ports, and 8-bit DIO.
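To give a sense of the ingest side of such an NVR, here is a minimal sketch, not Adlink code, that pulls frames from one of the PoE-attached IP cameras over RTSP using OpenCV. The camera address and credentials are placeholders.

```python
# Sketch: pulling frames from a PoE-attached IP camera over RTSP with
# OpenCV. The camera address and credentials are placeholders.
import cv2

# A typical IP camera exposes an RTSP endpoint over its PoE link.
cap = cv2.VideoCapture("rtsp://user:pass@192.168.1.10:554/stream1")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; a real NVR would reconnect here
    # Hand the frame to recording and/or an inference pipeline...
    cv2.imwrite("latest_frame.jpg", frame)  # stand-in for real handling

cap.release()
```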

The wall- and DIN-rail mountable system measures 210 x 170 x 55mm and supports 0 to 50°C temperatures. There's a 12V DC input and optional 160W AC/DC adapter, as well as power and reset buttons.

 
DLAP-201-JT2

The ultra-compact DLAP-201-JT2 is designed as an edge inference platform for accelerating deep learning workloads for object detection, recognition, and classification, says Adlink. Examples include real-time traffic management optimization, improved smart bus routing, more timely security surveillance analysis, and other “smart city and smart manufacturing applications.”
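For a flavor of this kind of edge inference workload, the sketch below uses Nvidia's open source jetson-inference Python bindings to run an SSD-MobileNet detector on a local camera. The framework choice, model, and camera device are our assumptions, as Adlink does not specify a software stack.

```python
# Sketch: TensorRT-accelerated object detection on a Jetson using the
# jetson-inference bindings. Model choice and camera device are
# illustrative; assumes the jetson-inference library is installed.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")

while True:
    # Capture a frame and run TensorRT-optimized detection on it
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    for d in detections:
        print("class {}  confidence {:.2f}".format(d.ClassID, d.Confidence))
```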



DLAP-201-JT2, front and back

The DLAP-201-JT2 moves up to the more powerful Jetson TX2 module, with dual high-end "Denver" cores and 4x Cortex-A57 cores, as well as more powerful 256-core Pascal graphics. The Adlink system also supports the earlier Jetson TX1, which has the same CPU complement as the Nano but provides 256-core Maxwell graphics, placing it between the Nano and the TX2. The TX2 supplies 8GB LPDDR4 while the TX1 has 4GB; both include 16GB eMMC.

The DLAP-201-JT2 is even smaller than the Nano-based M100-Nano-AINVR, measuring 148 x 105 x 50mm. It has IP40 protection and a wider -20 to 70°C operating range to suit industrial and outdoor applications. Wall and DIN-rail mounts are available.

The system is equipped with 2x GbE, 2x USB 3.0, and single HDMI 2.0, serial COM, and CANBus ports. There's also 4-channel DIO, a debug console port, and optional audio jacks. For storage, you get an SD slot (possibly micro) and mSATA. There's also a separate mini-PCIe slot with a micro-SIM slot. The TX2 module supplies onboard 802.11ac WiFi and Bluetooth 4.0, and four SMA antenna holes are available.

There’s a 12V DC input with optional 40W adapter and power and recovery buttons. There’s also a CMOS battery holder with reverse charge protection.

 
M300-Xavier-ROS2

The M300-Xavier-ROS2 is an embedded robotics controller that runs on the Jetson AGX Xavier. The ROS2-enabled system provides autonomous navigation for automated mobile robots.
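As a hint of what runs on such a controller, here is a minimal ROS 2 node sketch in Python (rclpy), publishing the kind of velocity commands a mobile robot base consumes. The topic name and speed values are illustrative, not Adlink's code.

```python
# Sketch: a minimal ROS 2 (rclpy) node publishing velocity commands,
# the kind of workload an autonomous mobile robot controller hosts.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class CmdVelPublisher(Node):
    def __init__(self):
        super().__init__("cmd_vel_publisher")
        self.pub = self.create_publisher(Twist, "cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.2   # creep forward at 0.2 m/s
        msg.angular.z = 0.0  # no rotation
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(CmdVelPublisher())

if __name__ == "__main__":
    main()
```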



M300-Xavier-ROS2 expansion cassette (left), M300 without cassette (middle) and with cassette

Nvidia's Xavier module features 8x ARMv8.2 cores and a high-end, 512-core Nvidia Volta GPU with 64 tensor cores, plus 2x Nvidia Deep Learning Accelerator (DLA) engines. The module is equipped with a 7-way VLIW vision chip, as well as 16GB of 256-bit LPDDR4 and 32GB of eMMC 5.1.

The fanless M300-Xavier-ROS2 measures 190 x 210 x 80mm, expanding to 322 x 210 x 80mm with the expansion cassette. The 0 to 50°C tolerant system has a wide-range 9-36V DC input and an optional 280W AC adapter, as well as recovery and reset buttons.

The system is equipped with 2x GbE, 6x USB 3.1 Gen1, and a single USB 3.1 Gen2 port. You also get 3x RS-232 ports and single RS-232/485 and HDMI ports. For storage, there’s a microSD slot and an M.2 Key B+M (3042/2280) slot. An optional expansion cassette adds PCIe x8 and PCIe x4 slots.

Other features include 20-bit GPIO and UART, SPI, CAN, I2C, PWM, ADC, and DAC interfaces. Although not listed in the spec list proper, the bullet points mention mini PCIe and M.2 E key 2230 expansion, as well as a MIPI-CSI camera connection.

 
ALPS-4800 (Edge Server with Tesla)

The ALPS-4800 is a server-like, carrier-grade AI training platform in a 4U1N rackmount form factor. It features dual Intel Xeon Scalable processors and 8x PCIe x16 Gen3 GPU slots. The system is validated to run Nvidia EGX code on Nvidia's Tesla P100 and V100 GPU accelerators, the training-oriented cousins of the inference-focused Tesla T4.



ALPS-4800 (Edge Server with Tesla)

The ALPS-4800 supports both single and dual root complexes for various AI applications, says Adlink. For deep learning, a single root complex can utilize all the GPU clusters to focus on large-size data training jobs while the CPUs handle smaller tasks. For machine learning, a dual root complex can allocate more tasks to the CPUs and arrange fewer distributed data training jobs among GPUs.
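As a rough illustration of the single-root pattern, where one large training job spans all of the GPUs, here is a hedged PyTorch sketch. The model and data are toy placeholders, and this is our example rather than Adlink software.

```python
# Sketch: one large training job spread across all visible GPUs, the
# usage pattern a single root complex favors. Model and data are toy
# placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))
# DataParallel splits each batch across all available GPUs and gathers
# the results on the default device.
model = nn.DataParallel(model).cuda()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(256, 1024).cuda()        # toy input batch
    y = torch.randint(0, 10, (256,)).cuda()  # toy labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```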

The system supports up to 3TB of 2666MHz DDR4 and offers 8x 2.5-inch SATA drive bays. In addition to the 8x PCIe slots dedicated to the Tesla cards, there are 4x PCIe Gen3 slots and a storage mezzanine.

Other features include 2x 10GbE SFP+ NIC ports, a dedicated GbE BMC port, and OCP 2.0 slots for up to 100GbE connectivity. You also get 4x USB ports, a VGA port, and a 1600W power supply.

 
Further information

Nvidia EGX appears to be available today. More information may be found in Nvidia’s EGX announcement and product page.

No pricing or availability information was provided for Adlink’s four “preliminary” EGX-enabled edge servers. More information may be found in Adlink’s M100-Nano-AINVR, DLAP-201-JT2, M300-Xavier-ROS2, and ALPS-4800 product pages.

 
