News

Five AI Inference Trends for 2022

It’s an exciting time to be a part of the rapidly growing AI industry, particularly in the field of inference. Once confined to high-end, outrageously expensive computing systems, AI inference has been marching toward the edge at super-fast speeds. Today, customers in a wide range of industries – medical, industrial, robotics, security, retail, and imaging – are either evaluating or actively designing AI inference capabilities into their products and applications.

How Inferencing Differs From Training In Machine Learning Applications

Why it’s so important to match the AI task to the right type of chip. Machine learning (ML)-based approaches to system development employ a fundamentally different style of programming than that historically used in computer science. This approach uses example data to train a model, enabling the machine to learn how to perform a task. ML training is highly iterative, with each new piece of training data generating trillions of operations.
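The training/inference split described above can be sketched in a few lines. The linear model, synthetic data, and learning rate below are illustrative assumptions (not anything specific to the article); the point is that training loops over the data many times to update parameters, while inference is a single forward pass with those parameters frozen.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # illustrative input data
true_w = np.array([2.0, -1.0, 0.5])     # "ground truth" used to generate targets
y = X @ true_w

w = np.zeros(3)                         # model parameters, learned from data
lr = 0.1                                # learning rate (illustrative choice)

# Training: iterative — repeated passes over the data, each one
# computing a forward pass, a gradient, and a parameter update.
for _ in range(200):
    pred = X @ w                        # forward pass
    grad = X.T @ (pred - y) / len(y)    # gradient of mean-squared error
    w -= lr * grad                      # parameter update

# Inference: a single forward pass with the trained, now-fixed parameters.
x_new = np.array([1.0, 1.0, 1.0])
print(round(float(x_new @ w), 2))       # ≈ 2.0 - 1.0 + 0.5 = 1.5
```

This is why the two workloads favor different silicon: training hardware is sized for the repeated gradient computations in the loop, while inference hardware only needs the forward pass, which is what makes low-power edge deployment practical.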

Edge inference accelerator has an eye on Megapixel vision systems

An edge inference accelerator developed by Flex Logix has a 4K-MAC dynamic tensor processor array and is optimized for megapixel image-processing models in medical, surveillance, and IoT applications.

Flex Logix named top Edge AI company to know in 2021

Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, silicon, and systems. Headquartered in Mountain View, CA, it is one of the top Edge AI companies and recently announced production availability of its InferX X1P1 accelerator board, which the company says offers the most efficient AI inference acceleration for edge workloads such as YOLOv3.

On-Chip FPGA: the “other” compute resource

Expanding the possibilities in the processor subsystem.

Flex Logix joins the Embedded AI+Vision Alliance

Membership will help support Flex Logix's rapid growth in the edge vision market with its inference accelerator chips and boards.

Flex Logix supplies eFPGA for DoD’s RAMP initiative

Bringing commercial innovations in chip design to national security.

Flex Logix accelerates growth with new office in Austin

Flex Logix Accelerates Growth With New Office in Austin; Prepares for Global Expansion of Its Edge AI Inference Product Line. The company is actively hiring both software and hardware engineers.

Reconfigurable Computing with Analog and MCUs

Watch the video to explore how a company with analog or MCU expertise can combine it with eFPGA to save its end customers power and cost while increasing flexibility.

Getting better edge performance & efficiency from acceleration-aware ML model design

Machine learning techniques have benefited greatly from acceleration technology such as GPUs, TPUs, and FPGAs. Indeed, without acceleration technology, machine learning would likely have remained the province of academia and would not have had the impact it is having on our world today.