Resources
InferX™ X1
- InferX X1 Product Brief - Product Overview of the InferX X1 AI Accelerator
- Linley Fall Processor Conference: A Flexible and Powerful Architecture for Edge AI
- Linley Spring Processor Conference: Easy-to-Use X1 Inference Compiler Software
- Linley Spring Processor Conference: Low Power X1 M.2 Card
- Microprocessor Report Article

Speeding Up AI Algorithms
AI at the edge is very different from AI in the cloud. Salvador Alvarez, director of solution architecture at Flex Logix, talks about why a specialized inferencing chip with built-in programmability is more efficient and scalable than a general-purpose processor, why high-performance models are essential for getting accurate real-time results, and how low-power operation and ambient temperature can affect the performance and life expectancy of these devices.

Q&A with Sam Fuller from Flex Logix - InferX and computer vision applications
We caught up with Sam to discuss what Flex Logix does, what the InferX platform is, how both the company and the platform differ from the competition, how easy it is to port models to the InferX platform, and more.

Product of the Week: Flex Logix InferX X1M Edge Inference Accelerator
Every type of edge AI has three hard-and-fast technical requirements: low power, small form factor, and high performance. Of course, what constitutes “small,” “power-efficient,” or “high-performance” varies by use case and can describe everything from small microcontrollers to edge servers, but usually you must sacrifice at least one to get the others. However, one solution that can address everything from edge clouds to endpoints without sacrifice is the FPGA.