Your future can take off with the explosive growth of AI Inference and eFPGA
We are hiring to keep up with customer demand for Inference & eFPGA
We are venture-backed, have executives with proven startup and IPO experience, and technologists who won the ISSCC Outstanding Paper Award. You'll learn faster, have a big impact, and have fun at Flex Logix. Work with super-talented hardware engineers who have worked on numerous high-volume ICs on process nodes from 7nm to 180nm for dozens of companies from AMD to Zoran. Our software developers are just as talented and experienced: more than one-third of our technical team is software. And now we are building a Systems/PCB engineering team too.
Our new nnMAX product for AI inference has very large performance, power, and cost advantages over existing solutions: we use MACs more efficiently, meaning less silicon area; we achieve throughput with 1/10th the DRAM bandwidth, meaning lower system cost and power; and our solution works as well at batch size = 1 as at large batch sizes, unlike all other architectures. This makes nnMAX the optimal solution for the biggest AI market: bringing neural networks to "Edge" applications from cars to cameras to edge servers.
We are using nnMAX to tape out our first chip in Q4 2019: the InferX X1 edge inference co-processor. It delivers throughput near that of data center products but at edge inference power (single-digit watts) and pricing (double-digit dollars). It is programmed using TensorFlow Lite or ONNX, and a performance modeler is already available. InferX X1 will be available as a chip for edge systems AND as a PCIe card for servers.
We are the leader in a new market: embedded FPGA, or eFPGA. We are the "ARM of FPGA technology," providing the semiconductor IP, software tools, and architecture/applications tools to integrate eFPGA of any size, with any combination of features (DSP, RAM, etc.), on any process node. We are already silicon-proven on the most important nodes: TSMC 16/12, 28/22, and 40, plus GlobalFoundries 14/12; and we are working on TSMC 7/6. 10+ working chips use EFLX eFPGA, and 10+ more are in fab or design (Datang/MorningCore, Boeing, SiFive, Sandia, DARPA, Harvard, and HiPer are ones we can mention publicly; Harvard and Sandia have presented papers on their working silicon). eFPGA will become pervasive in SoCs, MCU/IoT, networking, wireless/base stations, and aerospace.
We have grown fast and need more great people to keep up with growing customer demand. We look for people with a proven track record, who are top performers, and who are passionate and entrepreneurial. Knowledge of neural networks is desirable but not required: we can train you if you are willing to learn.
We don't care where you are from (our current team comes from North America, South America, Africa, Europe, and Asia), but you must currently live in Silicon Valley and preferably be a US Citizen or US Permanent Resident (have a "green card"); we will also consider H-1B holders. We have multiple women employees, including in management.
How to Apply
Send your resume and contact info to email@example.com.
Only apply if you are highly qualified, very smart, super-motivated, and willing to work hard!