Reconfigurable computing

InferX™ X1 is the fastest and most efficient Edge AI Inference Accelerator

Flex Logix™ is the leading provider of embedded FPGA IP and software.

We are growing to support customer demand

New Austin, TX Office Open


New Mt. View, CA Office Open


Top throughput on tough models.
More throughput for less $ & fewer watts.



Accelerate workloads
& make your SoC flexible for changing needs.

eFPGA proven on 12, 16, 22, 28, 40 & 180nm.

Flex Logix™ Technology

Programmable Interconnect

Inference and eFPGA are both data-flow architectures. A single inference layer can require over a billion multiply-accumulates. Our Reconfigurable Tensor Processor reconfigures the 64 TPUs and RAM resources to implement each layer efficiently with a full-bandwidth, dedicated data path, like an ASIC, then repeats this layer by layer. Flex Logix uses a breakthrough interconnect architecture: less than half the silicon area of a traditional mesh interconnect, fewer metal layers, higher utilization, and higher performance. The ISSCC 2014 paper detailing this technology won the ISSCC Lewis Winner Award for Outstanding Paper. The interconnect continues to be improved, resulting in new patents.
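The layer-by-layer flow described above can be sketched as follows. This is an illustrative software model, not the actual hardware or toolchain; the `Layer` fields and the per-layer TPU counts are assumptions made up for the example, and only the 64-TPU figure comes from the text.

```python
# Illustrative sketch: a reconfigurable accelerator builds a dedicated
# datapath for one layer, runs it to completion, then reconfigures for
# the next layer. Class and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    macs: int          # multiply-accumulates in this layer (can exceed 1e9)
    tpus_needed: int   # 1-D tensor processors the layer maps onto

class ReconfigurableAccelerator:
    TOTAL_TPUS = 64    # per the description above

    def run_model(self, layers):
        total_macs = 0
        for layer in layers:
            # Reconfigure the interconnect: dedicate TPUs and SRAM to this
            # layer, forming a full-bandwidth, ASIC-like data path.
            tpus = min(layer.tpus_needed, self.TOTAL_TPUS)
            total_macs += layer.macs
            print(f"{layer.name}: {tpus} TPUs, {layer.macs:,} MACs")
        return total_macs

model = [Layer("conv1", 1_200_000_000, 64), Layer("fc", 4_000_000, 8)]
acc = ReconfigurableAccelerator()
print(f"total MACs: {acc.run_model(model):,}")
```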

Superior Scalability

We can easily scale up our Inference and eFPGA architectures to deliver compute capacity of any size. Flex Logix does this with a patented tiling architecture: interconnects at the edges of the tiles allow them to abut and automatically form a larger array of any size.


SRAM is closely coupled with our compute tiles using another patented interconnect. Inference efficiency comes from closely coupling local SRAM with compute, which is 100x more energy efficient than accessing DRAM. This interconnect is also useful for many eFPGA applications.
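A back-of-envelope calculation shows why that 100x ratio matters. The absolute per-byte energy below is an illustrative assumption, not a measured value; only the 100x ratio comes from the text.

```python
# Illustrative energy estimate. SRAM_PJ_PER_BYTE is an assumed value;
# the DRAM figure applies the 100x ratio claimed above.

SRAM_PJ_PER_BYTE = 1.0                     # assumed local SRAM access cost
DRAM_PJ_PER_BYTE = 100 * SRAM_PJ_PER_BYTE  # per the 100x claim

def transfer_energy_uj(bytes_moved, pj_per_byte):
    # picojoules -> microjoules
    return bytes_moved * pj_per_byte / 1e6

layer_bytes = 10 * 1024 * 1024  # say, 10 MB of activations for one layer
print(f"SRAM: {transfer_energy_uj(layer_bytes, SRAM_PJ_PER_BYTE):.1f} uJ")
print(f"DRAM: {transfer_energy_uj(layer_bytes, DRAM_PJ_PER_BYTE):.1f} uJ")
```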


Our Reconfigurable Tensor Processor features 64 one-dimensional tensor processors closely coupled with SRAM. Using our proprietary interconnect, these processors are reconfigured to implement the multi-dimensional tensor operations required by each layer of a neural network model, delivering high utilization and high throughput.
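The idea of building multi-dimensional tensor operations from one-dimensional units can be shown with a minimal sketch: a 2-D matrix multiply decomposed into independent 1-D dot products, each of which could run on its own 1-D unit. This is a conceptual illustration, not the actual TPU implementation.

```python
# Illustrative sketch: composing a 2-D matrix multiply out of 1-D
# dot-product units, the way 1-D tensor processors can be ganged into
# multi-dimensional tensor operations.

def dot_1d(a, b):
    # One 1-D unit: a multiply-accumulate over a pair of vectors.
    return sum(x * y for x, y in zip(a, b))

def matmul_2d(A, B):
    # Each output element is one 1-D dot product of a row of A with a
    # column of B; independent units can compute them in parallel.
    cols_B = list(zip(*B))
    return [[dot_1d(row, col) for col in cols_B] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_2d(A, B))  # [[19, 22], [43, 50]]
```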


Our inference compiler takes TensorFlow Lite and ONNX models and programs our inference architecture, using our eFPGA compiler as the back end. A performance modeler for our inference architecture is available now. Our eFPGA compiler has been in use by dozens of customers for several years. Software drivers will be available for common server operating systems and for real-time operating systems on MCUs and FPGAs.
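The compiler flow described above (model front end, layer-by-layer mapping, eFPGA back end) can be sketched as a simple pipeline. Every function and data-structure name here is hypothetical; this is not the actual toolchain.

```python
# Illustrative sketch of a three-stage flow: front end parses the model,
# a mapper assigns resources per layer, and an eFPGA back end emits
# per-layer configurations. All names are hypothetical.

def parse_model(model):
    # Front end: treat a TFLite/ONNX model as an ordered list of layers.
    return model["layers"]

def map_layer(layer):
    # Mapper: choose TPU and SRAM resources for one layer
    # (resource counts are placeholder assumptions).
    return {"layer": layer, "tpus": 64, "sram_banks": 16}

def efpga_backend(mapping):
    # Back end: the eFPGA compiler turns a mapping into configuration bits.
    return f"config({mapping['layer']})"

def compile_model(model):
    return [efpga_backend(map_layer(l)) for l in parse_model(model)]

print(compile_model({"layers": ["conv1", "relu1", "fc"]}))
```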


Flex Logix chip products will be available in PCIe card format for edge servers and gateways; an M.2 version is in development.

Superior Low-Power Design Methodology

Flex Logix has numerous architecture and circuit design technologies to deliver the highest throughput at the lowest power.