
Myriad X based AI accelerator module has a built-in 12MP, 4K camera

May 28, 2020 — by Eric Brown

Luxonis is crowdfunding an open source, $169 “megaAI” AI acceleration and camera module with an up to 4-TOPS Intel Myriad X VPU, a USB 3.0 port, and a 12-megapixel, 4K camera.

When Luxonis launched its Intel Movidius Myriad X DepthAI AI acceleration module last November, an external 4K camera was optional. Now, Luxonis has gone to Crowd Supply to successfully launch a $169 megaAI module, which is like a DepthAI with an integrated 12-megapixel, 4K camera.

Whereas DepthAI was available in USB add-on and Raspberry Pi HAT versions, as well as a carrier board that integrated a Raspberry Pi Compute Module 3B+, the megaAI is available only as a USB-equipped module. Like the DepthAI USB module, the megaAI can work with any Linux, Mac, or Windows computer, but is primarily marketed as an add-on to the Raspberry Pi.

megaAI with a quarter that looks like it has been chewed on by a dog — possibly the guilty looking canine shown at right

The megaAI is equipped with the up to 4-TOPS Myriad X VPU, and with the help of Intel’s OpenVINO toolkit it supports object detection and tracking. Specific applications include automatically triggered 4K filming, as well as health and safety monitoring, such as identifying mask usage. It can also be used for inspection in manufacturing, agriculture, and food processing.

The megaAI is available for $169 with shipments due July 30 or $199 for an expedited model due June 1. There is also an Educator Edition with the same $149 price as the earlier DepthAI USB board, which is now sold for $115 on the Luxonis website, and a $795 five-pack discount. Only three days remain for the Crowd Supply campaign.

megaAI closeup view

The 12MP autofocus camera supports 4056 x 3040-pixel image capture and has an 81-degree horizontal field of view (HFOV). It offers 4K @ 30fps encoding in H.265 and also supports H.264. Raw output specs when plugged into a 10Gbps USB 3.1 Gen2 host port are [email protected] or [email protected], says Luxonis.

The megaAI is equipped with a USB 3.1 Gen1 Type-C port with 5Gbps throughput, which is the same as the USB 3.0 Type-C available with the DepthAI. The 40 x 30mm, 2-ounce board consumes a maximum of 2.5W.
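The gap between the module’s 5Gbps port and the 10Gbps host port mentioned for raw output can be sanity-checked with some rough arithmetic. A minimal sketch, assuming a 10-bit raw Bayer pixel format and a flat 20% protocol overhead — both assumptions not stated in the article:

```python
# Rough frame-rate ceilings for raw 12MP capture over USB.
# The 10-bit raw pixel depth and ~20% protocol overhead are
# assumptions; the article does not state either figure.
W, H = 4056, 3040          # 12MP sensor resolution from the article
BITS_PER_PIXEL = 10        # assumed raw Bayer depth
bits_per_frame = W * H * BITS_PER_PIXEL

for name, gbps in [("USB 3.1 Gen1 (5 Gbps)", 5e9),
                   ("USB 3.1 Gen2 (10 Gbps)", 10e9)]:
    usable = gbps * 0.8    # assumed usable share after overhead
    print(f"{name}: ~{usable / bits_per_frame:.1f} fps raw 12MP ceiling")
```

Under these assumptions, full-resolution raw streaming roughly doubles its frame-rate headroom on a Gen2 link, which is consistent with Luxonis quoting raw figures against a 10Gbps host port.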

Earlier DepthAI CM3, HAT, and USB3 models (left to right)

As with the DepthAI, Luxonis claims the megaAI can offload far more processing from the Raspberry Pi than a Pi mated with Intel’s Myriad X based Neural Compute Stick 2 (NCS2) USB stick accelerator. The board can achieve real-time object detection at up to 25.5fps when plugged into a Raspberry Pi 3B+, compared to 8.31fps with an NCS2 on a 3B+, both using Intel’s OpenVINO toolkit. The RPi 3B+ on its own tops out at 5.88fps, says Luxonis.
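For scale, the claimed benchmark numbers work out to roughly a 3x advantage over the NCS2 and better than 4x over the bare Pi:

```python
# Relative speedups implied by Luxonis' object-detection benchmark
# figures (all measured on a Raspberry Pi 3B+, per the article).
megaai_fps = 25.5   # megaAI plugged into RPi 3B+
ncs2_fps = 8.31     # NCS2 stick on RPi 3B+
rpi_fps = 5.88      # RPi 3B+ alone

print(f"megaAI vs NCS2:    {megaai_fps / ncs2_fps:.1f}x")
print(f"megaAI vs bare Pi: {megaai_fps / rpi_fps:.1f}x")
```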

The megaAI hardware and software ship under an open source MIT license and are backward compatible with DepthAI applications. The module supports OpenCV and is available with a Python script that demos a MobileNetSSD based application. (For more details on megaAI’s Myriad X implementation, see our earlier DepthAI report.)
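A MobileNet-SSD demo like the one mentioned above ultimately boils down to post-processing the detection tensor the VPU returns. A minimal sketch of that step — the `[image_id, label, confidence, xmin, ymin, xmax, ymax]` layout with normalized coordinates is the common Caffe MobileNet-SSD convention, and the function name and sample values here are illustrative assumptions, not the actual Luxonis API:

```python
# Sketch of MobileNet-SSD post-processing, the kind of step a demo
# script performs on detections returned by the accelerator. Tensor
# layout and names are assumptions based on the common Caffe
# MobileNet-SSD output format, not the actual Luxonis API.

def parse_detections(raw, frame_w, frame_h, conf_threshold=0.5):
    """Filter raw SSD detections and scale boxes to pixel coordinates."""
    boxes = []
    for _, label, conf, x1, y1, x2, y2 in raw:
        if conf < conf_threshold:
            continue  # drop low-confidence noise
        boxes.append({
            "label": int(label),
            "confidence": conf,
            "box": (int(x1 * frame_w), int(y1 * frame_h),
                    int(x2 * frame_w), int(y2 * frame_h)),
        })
    return boxes

# Two fabricated detections: one confident "person" (class 15 in the
# usual 20-class VOC labels), one below-threshold noise detection.
raw = [
    (0, 15, 0.92, 0.10, 0.20, 0.45, 0.90),
    (0, 7, 0.12, 0.50, 0.50, 0.60, 0.60),
]
print(parse_detections(raw, frame_w=3840, frame_h=2160))
```

On the real hardware the host would read these tensors over USB and then draw or act on the boxes, which is why so little CPU work remains for the Pi itself.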

Further information

The megaAI module is available starting at $169 on Crowd Supply, with only three days left and shipments due June 1 or July 30, depending on the package. More information may be found on the Crowd Supply page and more should eventually appear on the Luxonis website.




One response to “Myriad X based AI accelerator module has a built-in 12MP, 4K camera”

  1. Rogan Dawes says:

    Would be interesting to see if the current fad of virtual backgrounds could be offloaded to this module. i.e. detect the foreground object, erase the background, replace it with a green screen. Could help to reduce the CPU load on the host processor. Then it would be a simple compositing task, rather than running edge detection, etc. Not sure if having the green screen available would necessarily result in reduced CPU use on the host, unfortunately, since it would probably still be trying to do its own edge detection as well. Might be a simpler problem, at least.
