With the Mustang-V100 series, ICP Deutschland offers a flexibly scalable, energy-saving and low-latency solution for Deep Learning (DL) inference at the edge. Edge deployments primarily rely on systems designed to make fast decisions locally, without uploading data to the cloud. With the Mustang-V100-MX4 PCIe AI accelerator card, ICP Deutschland expands its portfolio with a variant equipped with four Intel® Movidius™ Myriad™ X MA2485 Vision Processing Units (VPUs). The PCI Express based card can be integrated into a variety of embedded systems.
Native FP16 support, fast porting and deployment of neural networks in Caffe or TensorFlow format, and low power consumption are key features of the Myriad™ X VPU. Multi-channel capability allows each VPU to be assigned a different DL topology for simultaneous computation; AlexNet, GoogLeNet, Tiny YOLO, SSD300, ResNet, SqueezeNet and MobileNet are just a few of the supported topologies. Compatibility with Intel®'s Open Visual Inference and Neural network Optimization (OpenVINO™) toolkit optimizes the performance of the trained model and scales it to the target system at the edge, enabling a streamlined integration without tedious trial and error. The Mustang-V100 series is compatible with a variety of popular operating systems such as Ubuntu 16.04, CentOS 7.4 and Windows 10 IoT. With a low power consumption of 2.5 W per VPU, or 15 W total for the Mustang-V100-MX4, the card is especially suitable for low-power AI applications. Besides the Mustang-V100-MX4 with four VPUs, ICP Deutschland offers a PCIe variant with eight VPUs as well as variants based on the Mini PCIe and M.2 interfaces.
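To illustrate the OpenVINO™ workflow described above, the following is a minimal sketch (not vendor code) of running inference on the card's Myriad™ VPUs, assuming the ~2021 OpenVINO Python Inference Engine API and hypothetical file names model.xml/model.bin produced beforehand by OpenVINO's Model Optimizer from a Caffe or TensorFlow model.

```python
# Minimal sketch: inference on a Myriad X VPU via OpenVINO's Python
# Inference Engine API (circa 2021). "model.xml"/"model.bin" are
# hypothetical IR files produced by OpenVINO's Model Optimizer.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Each Myriad X VPU enumerates as its own MYRIAD device, which is what
# allows a different DL topology to be loaded onto each of the four VPUs.
print(ie.available_devices)  # e.g. ['CPU', 'MYRIAD.1.1-...', 'MYRIAD.1.2-...']

net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Feed a dummy input of the network's expected shape (e.g. [1, 3, 224, 224]).
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
dummy = np.random.rand(*input_shape).astype(np.float32)

results = exec_net.infer(inputs={input_name: dummy})
```

To target a specific VPU rather than the first free one, the device name returned by `ie.available_devices` (e.g. `MYRIAD.1.2-...`) can be passed as `device_name` instead of the generic `MYRIAD`.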
Specifications
- AI accelerator card with four Intel® Movidius™ Myriad™ X MA2485 VPUs
- Single-slot PCIe x2 interface
- Operating temperature: 5°C~55°C
- Low power consumption: <15W TDP
- Actively cooled
- Support for common ANN topologies such as AlexNet, GoogLeNet, Tiny YOLO, SSD300, ResNet, SqueezeNet and MobileNet
Applications
- Multi-channel inference with a different DL topology per VPU
- Acceleration of low-power Deep Learning inference applications