AI Inferencing

Understanding AI Inferencing

Modern Artificial Intelligence (AI) systems use a paradigm called machine learning. Machine learning (ML) is typically composed of both training and inferencing components. Training is the highly computationally intensive process in which the machine (computer) learns how to perform a task. ML training is usually performed in very large scale cloud computing systems and can take a very long time to complete (weeks or months), even when running on very high performance hardware. The output of the training process, a trained ML model, can be leveraged across many systems in the form of inference processing. Inference processing, or inferencing, refers to the process of producing a response to a stimulus based on training from example data sets. Example inferencing tasks include object or face detection in images or video, understanding human speech, and identifying cancerous cells in X-ray images.
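
To make the train-once, infer-many pattern concrete, here is a minimal sketch using scikit-learn as a stand-in (this is illustrative only, not Flex Logix or InferX code): the expensive fit() call plays the role of training, and the cheap predict() call plays the role of the inferencing that gets deployed across many systems.

```python
# Illustrative only: a tiny stand-in for the train/inference split described above.
# A real edge deployment would export the trained model (e.g., to ONNX) and run it
# on an inference accelerator; this sketch just shows the two-phase pattern.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # small 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# --- Training: done once, on powerful hardware, potentially for a long time ---
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# --- Inferencing: done many times, on many (edge) systems, one input at a time ---
prediction = model.predict(X_test[:1])  # respond to a single stimulus
print("predicted digit:", prediction[0])
```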

AI Training

The AI training process can be very computationally intensive, taking weeks or months to complete even when running on large scale data center servers.

AI Inferencing

AI inferencing at the Edge has historically been accomplished with GPU-based accelerator solutions. AI inferencing doesn't require the same high performance data center class systems needed for AI training, but it does require much higher performance than standard CPU processors can provide. GPU-based solutions are difficult to program, expensive, and power hungry. For AI inferencing to flourish, a new solution is required.

InferX Provides the Best Inference Solution

Inference processing, when properly accelerated, requires far less computation than training and can typically be performed in a fraction of a second using InferX AI acceleration technology.

The Flex Logix InferX AI acceleration technology is designed to accelerate AI applications at the Edge of the Internet. Edge devices typically have stringent power dissipation, size, and cost requirements. The InferX technology compresses the trillions of operations required for AI inferencing into a very compact and efficient AI accelerator, bringing AI capabilities, like real-time vision, that would have required a supercomputer just a few years ago, within the reach of any company's budget.

Flex Logix Solutions for Edge Inferencing

InferX Family of Edge Inferencing Solutions

InferX DK Software makes AI Inference easy!

X1M allows you to put high performance AI inference anywhere

InferX X1 Delivers More Throughput/$ Than Tesla T4, Xavier NX and Jetson TX2

Inference-optimized solutions like the InferX X1 are designed to be very silicon efficient. When compared to GPU approaches, the silicon savings are significant.
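
To show how a throughput/$ comparison like the one above is computed, here is a minimal sketch; the device names, throughput figures, and prices in it are placeholders for illustration only, not published benchmarks or pricing.

```python
# Illustrative sketch of a throughput-per-dollar comparison.
# All numbers below are PLACEHOLDERS, not published benchmarks or prices;
# substitute measured throughput (e.g., frames/sec on a model of interest)
# and the actual unit cost of each device being compared.
def throughput_per_dollar(frames_per_sec: float, unit_cost_usd: float) -> float:
    """Normalize raw inference throughput by hardware cost."""
    return frames_per_sec / unit_cost_usd

devices = {
    # name: (placeholder throughput in frames/sec, placeholder cost in USD)
    "Accelerator A": (100.0, 100.0),
    "GPU card B":    (200.0, 400.0),
}

for name, (fps, cost) in devices.items():
    print(f"{name}: {throughput_per_dollar(fps, cost):.2f} frames/sec per $")
```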

Featured Articles

Silicon Catalyst welcomes Flex Logix as an In-Kind Partner

Silicon Catalyst, the world’s only incubator focused exclusively on accelerating semiconductor solutions, is pleased to announce that Flex Logix® has joined as the newest member of its In-Kind Partner program (IKP). Portfolio companies in the Silicon Catalyst Incubator will have access to Flex Logix’s innovative embedded FPGA (eFPGA) IP and software, enabling silicon reconfigurability for use in their chip designs.

Flex Logix Partners With Roboflow to Enable Specialized AI Models for Computer Vision Applications

The availability of AI models optimized for the Flex Logix InferX accelerator enables edge device manufacturers to get to market quickly, reliably and affordably.

Speeding Up AI Algorithms

AI at the edge is very different than AI in the cloud. Salvador Alvarez, solution architect director at Flex Logix, talks about why a specialized inferencing chip with built-in programmability is more efficient and scalable than a general-purpose processor, why high-performance models are essential for getting accurate real-time results, and how low power and ambient temperatures can affect the performance and life expectancy of these devices.