OpenAI is seeking a Technical Lead Software Engineer for its Inference team, focused on high-performance model inference and infrastructure scaling. The role combines technical leadership with hands-on engineering and requires expertise in CUDA optimization and distributed systems. It offers a competitive compensation package of $460K-$685K plus equity and comprehensive benefits.
The role involves leading the design and implementation of core inference infrastructure for frontier AI models, optimizing CUDA-based systems, and collaborating with researchers to build scalable inference pipelines. The ideal candidate will have deep expertise in CUDA kernel optimization, experience with PyTorch and NVIDIA's GPU stack, and a strong background in large-scale ML infrastructure.
Working at OpenAI means joining a team dedicated to ensuring that AI benefits humanity. The company puts safety and human needs at the core of its work and offers the chance to build cutting-edge AI technology. Benefits include medical insurance, mental health support, generous parental leave, and learning stipends.
The role requires both technical depth and leadership skills: you'll mentor engineers and drive technical direction across teams. Based in San Francisco, you'll collaborate with research, infrastructure, and product teams to deliver state-of-the-art AI models efficiently and reliably. This is an opportunity to shape the future of AI while working with some of the most advanced models and infrastructure in the field.