💻 Ex-Google TPU Engineers Raise $500M for MatX, an Nvidia Challenger Focused on LLM Training

BEREA, Ky. — MatX, a startup founded by two former Google TPU engineers, says it has raised a $500 million Series B to build a new processor aimed at training large language models (LLMs), positioning itself as a direct challenger to Nvidia’s dominance in AI hardware.

The round was led by Jane Street and Situational Awareness (an investment firm founded by former OpenAI researcher Leopold Aschenbrenner), and included investors such as Marvell Technology, Spark Capital, and Stripe co-founders Patrick and John Collison.


🚀 The 10x Ambition

The company’s pitch is ambitious. MatX says its goal is to make processors “10 times better” at training LLMs than Nvidia’s GPUs, a claim that has spread widely because it is stark and easy to repeat. The company ties that “10x” figure to its stated aims of massive throughput and training performance.

MatX co-founder and CEO Reiner Pope previously led AI software development for Google’s TPUs, while co-founder and CTO Mike Gunter was a lead designer of the TPU hardware. The pair left Google in 2022 to build a chip focused entirely on the specific demands of large language models.


🔬 Under the Hood: The “MatX One”

In a company post announcing the Series B, Pope said MatX is building what it calls the “MatX One,” an LLM-focused chip designed for high throughput and extremely low latency.

The post describes an architecture based on a “splittable systolic array.” The design aims to combine the best of two worlds: the sub-nanosecond access latency of SRAM-first designs and the High Bandwidth Memory (HBM) capacity required to handle massive context windows. The company says it is targeting tapeout in under a year.
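MatX has not published details of its “splittable” variant, but the underlying idea of a systolic array is well established: a grid of processing elements (PEs) through which operands flow in skewed waves, with each PE doing one multiply-accumulate per cycle. The toy Python sketch below simulates an output-stationary systolic array computing a matrix product; the cycle loop and skew term are illustrative assumptions, not MatX’s actual design.

```python
# Toy cycle-level simulation of an output-stationary systolic array
# computing C = A @ B. PE (i, j) accumulates the dot product of row i
# of A and column j of B; the skew term models operands arriving at
# each PE staggered by its grid position, as in a real systolic array.
def systolic_matmul(A, B):
    n, k = len(A), len(A[0])   # A is n x k
    m = len(B[0])              # B is k x m
    C = [[0] * m for _ in range(n)]
    # The last operand pair reaches PE (n-1, m-1) at cycle (n-1)+(m-1)+(k-1).
    for cycle in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                t = cycle - i - j  # index of the operand pair arriving now
                if 0 <= t < k:
                    C[i][j] += A[i][t] * B[t][j]
    return C
```

The point of the structure is that, in hardware, every PE works in parallel each cycle, so an n-by-m array sustains n*m multiply-accumulates per clock once the pipeline fills; Google’s TPUs, where both MatX founders worked, use the same basic matrix-unit idea.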


📅 The 2027 Competitive Landscape

On the timeline, MatX plans to manufacture its chips with TSMC and start shipping in 2027. That date matters immensely because the competitive landscape shifts quickly. If MatX ships on schedule, it will be competing against Nvidia’s next-generation systems (like the upcoming Rubin architecture), not today’s hardware.

The bigger story here is that investors remain willing to fund new silicon teams at enormous scale if they believe there is room for a viable alternative to Nvidia in frontier AI training. MatX is betting it can win by aggressively narrowing its target: its public materials explicitly prioritize large models and de-prioritize smaller or more general workloads, one of the few ways a new chip company can try to beat an incumbent that has to serve everyone.


🖊️ About the Author

Chad Hembree is a certified network engineer with 30 years of experience in IT and networking. He hosted the nationally syndicated radio show Tech Talk with Chad Hembree throughout the 1990s and into the early 2000s, and previously served as CEO of DataStar. Today, he is based in Berea as the Executive Director of The Spotlight Playhouse, proof that some careers don’t pivot, they evolve.

BereaOnline.com: Covering Berea, KY News and Events Since 1995