InferX - AI Inference Accelerator

InferX™ X1 Edge Inference Accelerator

High Throughput, Low Cost, Low Power


The InferX X1 edge inference accelerator is optimized for large models and megapixel images at batch=1. Its price/performance is far better than that of existing edge inference solutions. The InferX X1 tool flow takes pre-trained models in ONNX (Open Neural Network Exchange) format and generates an optimal mapping of the model onto the X1 accelerator. For more information on the X1, contact us at: info@flex-logix.com


InferX X1P1 Delivers More Throughput/$ Than Tesla T4, Xavier NX and Jetson TX2

The InferX X1P1 board offers highly efficient AI inference acceleration for edge AI workloads. Many customers need high-performance, low-power object detection and other high-resolution image processing for robotic vision, security, retail analytics, and many other applications. The X1P1 delivers.

For more details, contact us at: info@flex-logix.com

See how reconfigurability enables high-efficiency inference acceleration.


X1M lets you put high-performance AI inference anywhere

The M.2 form factor brings inference to mechanically and power-constrained applications, and provides a faster path to production ramp than a custom card design.

InferX Software makes AI Inference easy!

See Jeremy Roberson's Linley presentation, and watch the video of the presentation.

Linley Gwennap

The Linley Group


Kevin Krewell

Tirias Research
