Deep learning inference at the edge

by donpedro

Deep learning (DL) inference at the edge calls for a flexible, scalable solution that is power efficient and has low latency. Edge deployments rely mainly on compact, passively cooled systems that make decisions quickly without uploading data to the cloud. The new Mustang-V100 AI accelerator card from ICP Deutschland helps developers bring trained AI models to the edge: eight Intel® Movidius™ Myriad™ X MA2485 Vision Processing Units (VPUs) are integrated on a PCIe-based expansion card. With a power consumption of just 2.5 W per VPU, the card is well suited to particularly demanding low-power AI applications at the edge.

Each VPU can be assigned a different DL topology, thanks to the VPUs' multi-channel execution capability, which allows calculations to run in parallel. Different applications, such as object recognition or image and video classification, can therefore be executed simultaneously. In addition, compatibility with Intel's OpenVINO™ toolkit optimizes the performance of the trained model and scales it to the target system at the edge, giving software developers fast, optimized integration without tedious trial and error. The Mustang-V100 is compatible with a variety of popular operating systems, including Ubuntu 16.04, CentOS 7.4 and Windows 10 IoT, and supports numerous artificial neural network architectures and topologies.
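As a rough illustration of the workflow described above, the sketch below shows how a model converted to OpenVINO's Intermediate Representation (IR) might be compiled for a Myriad X VPU using the OpenVINO Python runtime. This is a minimal sketch, not vendor documentation: it assumes a recent OpenVINO release (the `openvino.runtime.Core` API), an IR file pair (`model.xml`/`model.bin`) produced beforehand by OpenVINO's Model Optimizer, and a machine with the card and its drivers installed. The file name and the helper function are hypothetical.

```python
def compile_for_vpu(ir_xml: str, device: str = "MYRIAD"):
    """Compile an OpenVINO IR model for a Myriad X VPU (hypothetical helper).

    `device` selects the target plugin: "MYRIAD" addresses a single VPU,
    while a multi-device string such as "MULTI:MYRIAD.0,MYRIAD.1" could
    spread inference requests across several of the card's eight VPUs.
    """
    # Imported locally so the sketch can be read on machines without
    # the OpenVINO runtime installed (an assumption of this example).
    from openvino.runtime import Core

    core = Core()
    model = core.read_model(ir_xml)  # IR produced by the Model Optimizer
    return core.compile_model(model, device_name=device)
```

Because each VPU can host its own topology, one could, for example, compile an object-detection model for `MYRIAD.0` and a classification model for `MYRIAD.1` and serve both workloads from the same card.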

ICP. Industrial Computer Products …by people who care!

ICP Deutschland
