NVIDIA, the world leader in accelerated computing, is seeking a Distinguished Compiler Engineer to join its Deep Learning Compiler Technology team. This role represents a rare opportunity to shape the future of AI computing at one of technology's most innovative companies.
The position requires a seasoned professional with 16+ years of experience in compiler optimization and computer architecture, along with an advanced degree (Master's or PhD) in Computer Science or a related field. You'll be at the forefront of developing cutting-edge compiler technology for NVIDIA's next-generation GPUs, with a specific focus on deep learning applications.
As a Distinguished Engineer, you'll lead the technical development of kernel generation and optimization for computational graphs, spanning both inference and training workloads. Your work will directly impact the broader deep learning community as you collaborate with experts across software, hardware, and research divisions to co-design next-generation chips.
The role offers an exceptional compensation package, with a base salary ranging from $308,000 to $471,500 USD, plus equity and comprehensive benefits. NVIDIA's commitment to innovation in AI and digital twins is transforming major industries, making this an opportunity to work on truly groundbreaking technology.
You'll be joining a dynamic, product-oriented team where your expertise in compiler optimization, synthesis, and computer architecture will be highly valued. The position offers significant technical challenges along with the chance to mentor early-career engineers. NVIDIA's culture promotes creativity and autonomy, making it an ideal environment for experienced engineers passionate about pushing technological boundaries.
Based at NVIDIA's offices in either Santa Clara, CA or Redmond, WA, you'll be part of a company widely recognized as one of the technology industry's most desirable employers. The role combines deep technical work with leadership opportunities, making it well suited to someone who wants to make a lasting impact on the future of AI computing.