Cerebras Systems, a pioneer in AI hardware, is seeking a Distributed Software Engineer to join its innovative team. The company has built the world's largest AI chip, 56 times larger than the largest GPU, redefining the scale of AI compute. Its novel wafer-scale architecture delivers the computing capability of dozens of GPUs on a single chip, dramatically simplifying the management of machine learning applications.
The role focuses on building and maintaining the software infrastructure for Cerebras' multi-exaflop supercomputers, which are deployed in major datacenters. As part of the Cluster Engineering team, you'll develop the critical software components that manage Cerebras' Wafer-Scale Cluster technology.
The position offers the opportunity to work on cutting-edge technology at the intersection of distributed systems and AI. Cerebras has established partnerships with global corporations, national labs, and healthcare systems, including a multi-year, multi-million-dollar collaboration with Mayo Clinic. The company recently launched Cerebras Inference, the world's fastest generative AI inference solution.
The ideal candidate will bring strong expertise in distributed systems, cluster management, and software architecture. You'll work with modern technologies including Kubernetes, Go, and Python while building systems that operate at unprecedented scale. This is a chance to contribute to groundbreaking advances in AI computing infrastructure while working in a non-corporate culture that values individual perspectives and innovation.
Join Cerebras to be part of a team that's pushing the boundaries of what's possible in AI computing, with the stability of an established company combined with the energy and innovation of a startup.