InferX™ AI Resources
Overviews
InferX Product Overview: product brief on the InferX AI HW and SW technologies
Presentations
Architectural Challenges for Transformer Models
Videos
Improving Image Resolution At The Edge
Flex Logix Demonstration of Its InferX IP for AI Inference Implementing Object Detection at the Edge
Flex Logix Introduction to Its Latest-generation InferX IP for AI Inference at the Edge