Grow Your Future: Neural Inference & eFPGA
We are hiring to keep up with customer demand for Inference & eFPGA
We are venture-backed, have executives with proven startup and IPO experience, and technologists who won the ISSCC Outstanding Paper Award. You'll learn faster, have a big impact, and have fun at Flex Logix. You will work with super-talented hardware engineers who have shipped numerous high-volume ICs on process nodes from 180nm to 7nm for dozens of companies from AMD to Zoran. Our software developers are just as talented and experienced: more than 1/3 of our technical team is software.
Our new product nnMAX for neural inference has large performance, power, and cost advantages over existing solutions: we use MACs more efficiently, meaning less silicon area; we achieve throughput with 1/10th the DRAM bandwidth, meaning lower system cost and power; and our solution works as well at batch size = 1 as at large batch sizes, unlike all other architectures. This makes nnMAX the optimal solution for the biggest AI market: bringing neural networks to "Edge" applications from cars to cameras to edge servers.
We are using nnMAX to tape out our first chip in Q3: the InferX X1 edge inference co-processor. It delivers throughput near that of data center products, but at edge inference power (single-digit watts) and pricing (double-digit dollars). It is programmed using TensorFlow Lite or ONNX, and a performance modeler is already available.
We are the leaders in a new market: embedded FPGA, or eFPGA. We are the "ARM of FPGA technology," providing the semiconductor IP, software tools, and architecture/applications tools to integrate eFPGA of any size, with any combination of features (DSP, RAM, etc.), on any process node. We are already silicon-proven on the most important nodes: TSMC 16, 28, and 40; we are working on TSMC 7 and have taped out GF 14. Our technology is in fab and in design at multiple customers (Datang/MorningCore, Boeing, SiFive, Sandia, DARPA, Harvard, and HiPer are the ones we can mention publicly; Harvard and Sandia have presented papers on their working silicon) and will become pervasive in SoCs, MCU/IoT, networking, wireless/base stations, and aerospace.
We have grown fast, but we need more great people to keep up with growing customer demand. We look for people with a proven track record, who are top performers, and who are passionate and entrepreneurial. Knowledge of neural networks is desirable but not required; we can train you if you are willing to learn.
We don't care where you are from (our current team comes from North America, South America, Africa, Europe, and Asia), but you must currently live in Silicon Valley. We prefer US Citizens or US Permanent Residents (holders of a "green card"), but we will also consider H-1B holders. Our team includes multiple women, including in management.
How to Apply
Send your resume and contact info to firstname.lastname@example.org.
Only apply if you are highly qualified, very smart, super-motivated, and willing to work hard!