Cerebras Systems, a pioneering company in AI hardware, is seeking an AI Infrastructure Operations Engineer to join their team. The company is known for creating the world's largest AI chip, roughly 56 times larger than conventional GPUs, delivering unprecedented AI compute power with the simplicity of single-device management.
The role offers a unique opportunity to work with cutting-edge technology, specifically the Wafer-Scale Engine (WSE), and the systems that harness its unparalleled power. As an AI Infrastructure Operations Engineer, you'll be responsible for managing and operating advanced AI compute infrastructure clusters, ensuring optimal performance and availability.
The position requires a deep understanding of Linux-based systems, containerization technologies, and experience monitoring and troubleshooting complex distributed systems. The ideal candidate has 6-8 years of relevant experience, is proficient in Python scripting, has extensive knowledge of Docker and container orchestration platforms, and is comfortable with 24/7 on-call rotations.
Working at Cerebras means being at the forefront of AI technology advancement. The company has established partnerships with global corporations, national labs, and healthcare systems, including a multi-year, multi-million-dollar partnership with Mayo Clinic. In 2024, they launched Cerebras Inference, the fastest generative AI inference solution globally.
The company offers a unique work environment that combines the stability of an established company with the vitality of a startup. They promote a simple, non-corporate work culture that respects individual beliefs and encourages continuous learning and growth. Team members have the opportunity to work on one of the fastest AI supercomputers in the world and contribute to cutting-edge AI research.
This role is available in multiple locations, including Sunnyvale, CA; Toronto, Canada; and Bangalore, India, offering flexibility in work location. The position is ideal for someone who is passionate about AI infrastructure, enjoys solving complex technical challenges, and wants to be part of a team pushing the boundaries of what's possible in AI computing.