Flex Logix

InferX Software makes AI Inference easy, provides more throughput on tough models, costs less, and requires less power.

AI Inference Acceleration

Top throughput on tough models.
More throughput for less $ & less watts.

Learn More

eFPGA

Accelerate workloads
& make your SoC flexible for changing needs.

eFPGA proven on 6/7, 12, 16, 22, 28, 40 & 180nm.

Learn More

Featured Events

GOMACTech, Hyatt Regency Miami, Miami, FL

Join us in reviewing developments in microcircuit applications for government systems at GOMACTech at the Hyatt Regency Miami in Miami, FL. Come to our booth and see our presentation, "Radiation Hardened Techniques for eFPGAs for Accelerating Different Applications."

Computer Vision Summit

Join us in San Jose, CA on April 26-27, 2022 for the Computer Vision Summit. In April 2022, the Computer Vision Summit will kick off its world tour, assembling leaders in Computer Vision from the world's largest organizations and most exciting startups in Silicon Valley to share success stories, experiences and challenges. https://computervisionsummit.com

SmartNICs Summit 2022

Come visit us at the First Annual SmartNICs Summit. It is the show network designers can't miss if they want to stay competitive in the networking world. They'll get the scoop on ways to make their networks run faster, scale better, use less power, and be more flexible. This unique event gives attendees a place to network with peers, ask questions of the experts, and learn how eFPGA can help SmartNIC SoCs accelerate key functions while staying flexible.

Flex Logix™ Technology

PROGRAMMABLE INTERCONNECT

Inference and eFPGA are both data-flow architectures. A single inference layer can require over a billion multiply-accumulate operations. Our Reconfigurable Tensor Processor reconfigures the 64 TPUs and RAM resources to efficiently implement a layer with a full-bandwidth, dedicated data path, like an ASIC, then repeats this layer by layer. Flex Logix utilizes a breakthrough interconnect architecture: less than half the silicon area of a traditional mesh interconnect, fewer metal layers, higher utilization and higher performance. The ISSCC 2014 paper detailing this technology won the ISSCC Lewis Winner Award for Outstanding Paper. The interconnect continues to be improved, resulting in new patents.

SUPERIOR SCALABILITY

We can easily scale our Inference and eFPGA architectures to deliver compute capacity of any size. Flex Logix does this using a patented tiling architecture: interconnects at the edges of the tiles automatically join the tiles into a larger array.
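A rough way to picture the tiling idea (a conceptual sketch with hypothetical names, not Flex Logix's actual tile or interconnect design): identical tiles expose ports on each edge, and abutting tiles wire those ports together, so any grid size composes into one larger array.

```python
# Conceptual model of edge-connected tiling (hypothetical, for illustration only):
# identical tiles expose ports on each edge, and adjacent tiles are wired
# together so that any grid of tiles behaves as one larger array.
from dataclasses import dataclass, field


@dataclass
class Tile:
    row: int
    col: int
    neighbors: dict = field(default_factory=dict)  # edge name -> adjacent Tile


def build_array(rows: int, cols: int) -> list[list[Tile]]:
    """Compose a rows x cols array by connecting edge ports of adjacent tiles."""
    grid = [[Tile(r, c) for c in range(cols)] for r in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:  # east/west edge connection
                grid[r][c].neighbors["east"] = grid[r][c + 1]
                grid[r][c + 1].neighbors["west"] = grid[r][c]
            if r + 1 < rows:  # north/south edge connection
                grid[r][c].neighbors["south"] = grid[r + 1][c]
                grid[r + 1][c].neighbors["north"] = grid[r][c]
    return grid


array = build_array(4, 4)  # capacity scales by simply choosing a larger grid
```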

TIGHTLY COUPLED SRAM AND COMPUTE

SRAM is closely coupled with our compute tiles using another patented interconnect. Inference efficiency comes from this close coupling: a local SRAM access is roughly 100x more energy-efficient than a DRAM access. This interconnect is also useful for many eFPGA applications.
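A back-of-the-envelope calculation illustrates the claim. Using illustrative per-access energy figures of the kind often cited in the literature (roughly 5 pJ per 32-bit local SRAM access versus roughly 640 pJ per 32-bit DRAM access; actual values vary with process node and design), fetching one operand per MAC for a billion-MAC layer gives:

```python
# Back-of-the-envelope energy comparison (illustrative figures only; actual
# per-access energies depend on process node, array size, and interface).
SRAM_PJ_PER_32B_WORD = 5.0    # assumed local SRAM access energy, picojoules
DRAM_PJ_PER_32B_WORD = 640.0  # assumed external DRAM access energy, picojoules

accesses = 1e9  # one operand fetch per MAC for a billion-MAC layer

sram_mj = accesses * SRAM_PJ_PER_32B_WORD * 1e-12 * 1e3  # pJ -> mJ
dram_mj = accesses * DRAM_PJ_PER_32B_WORD * 1e-12 * 1e3

print(f"SRAM: {sram_mj:.1f} mJ, DRAM: {dram_mj:.1f} mJ, "
      f"ratio: {dram_mj / sram_mj:.0f}x")
# SRAM: 5.0 mJ, DRAM: 640.0 mJ, ratio: 128x
```

Under these assumed numbers, keeping operands in local SRAM saves two orders of magnitude of data-movement energy per layer.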

DYNAMIC TENSOR PROCESSOR

Our dynamic tensor processor features 64 one-dimensional tensor processors closely coupled with SRAM. The tensor processors are dynamically reconfigurable at runtime using our proprietary interconnect, enabling the multi-dimensional tensor operations required by each layer of a neural network model and resulting in high utilization and high throughput.
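One simplified way to picture the per-layer reconfiguration (a toy sketch with hypothetical classes, not the actual InferX programming model): configure the TPU array and interconnect as a dedicated datapath for one layer, run it at full bandwidth, then reconfigure for the next.

```python
# Toy sketch of per-layer reconfiguration (hypothetical classes, not the
# actual InferX software interface): the fabric is set up as a dedicated
# datapath for one layer, run, then reconfigured for the next.

class Fabric:
    """Stand-in for a reconfigurable TPU array with local SRAM."""

    def configure(self, layer):
        # In hardware this would program the interconnect so the 64 1-D tensor
        # processors and SRAM banks form the multi-dimensional operator this
        # layer needs; here it is only a placeholder.
        print(f"configure datapath for {layer}")

    def execute(self, layer, activations):
        print(f"run {layer}")
        return activations  # placeholder: real hardware streams results to SRAM


def run_model(layers, fabric, inputs):
    activations = inputs
    for layer in layers:  # reconfigure, run, repeat, layer by layer
        fabric.configure(layer)
        activations = fabric.execute(layer, activations)
    return activations


run_model(["conv1", "conv2", "fc"], Fabric(), inputs=None)
```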

SOFTWARE

Unlike solutions designed around AI model development and training, our Inference accelerator starts with a trained ML model, typically in ONNX format, and generates a program that runs on our InferX accelerators.
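In practice the flow would look something like the sketch below. The onnx call is real; the inferx module and its functions are hypothetical placeholders, not Flex Logix's published API.

```python
# Hypothetical workflow sketch: the "inferx" module and its functions are
# placeholders, not Flex Logix's published API. Only the onnx call is real.
import onnx
# import inferx  # hypothetical InferX compiler/runtime package

model = onnx.load("resnet50.onnx")  # a trained model exported to ONNX

# Compile the trained model into a program for the accelerator (hypothetical):
# program = inferx.compile(model, target="X1")

# Run inference on the card (hypothetical):
# outputs = inferx.run(program, inputs={"input": image_batch})
```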

Our eFPGA compiler has been in use by dozens of customers for several years. Software drivers will be available for common server operating systems and for real-time operating systems running on MCUs and FPGAs.

INFERX PCI EXPRESS AND M.2 OFFERINGS

The InferX X1 processor is in production and is available now in PCI Express (HHHL), M.2 (M+B key) and chip-level offerings.

SUPERIOR LOW-POWER DESIGN METHODOLOGY

Flex Logix has numerous architecture and circuit design technologies to deliver the highest throughput at the lowest power.

Featured Articles

Five AI Inference Trends for 2022

It’s an exciting time to be a part of the rapidly growing AI industry, particularly in the field of inference. Once relegated simply to high-end and outrageously expensive computing systems, AI inference has been marching towards the edge at super-fast speeds. Today, customers in a wide range of industries – from medical, industrial, robotics, security, retail and imaging – are either evaluating or actually designing AI inference capabilities into their products and applications.

How Inferencing Differs From Training In Machine Learning Applications

Why it’s so important to match the AI task to the right type of chip. Machine learning (ML)-based approaches to system development employ a fundamentally different style of programming than historically used in computer science. This approach uses example data to train a model to enable the machine to learn how to perform a task. ML training is highly iterative with each new piece of training data generating trillions of operations.

Edge inference accelerator has an eye on Megapixel vision systems

An edge inference accelerator developed by Flex Logix has a 4K-MAC dynamic tensor processor array and is optimised for megapixel image-processing models in medical, surveillance and IoT applications.
