
GPU Computing Capacity Optimization Engineer

NVIDIA is the world leader in accelerated computing. NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society.
Santa Clara, CA, USA · Westford, MA 01886, USA · Austin, TX, USA
$148,000 - $276,000
Backend
Senior Software Engineer
Hybrid
5,000+ Employees
5+ years of experience
AI · Enterprise SaaS
This job posting is no longer active. 😔

Job Description

NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities that are hard to solve, that only we can address, and that matter to the world.

As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of ground-breaking GPU compute clusters that run demanding deep learning, high performance computing, and computationally intensive workloads. In this role, we seek an expert to optimize capacity management and allocation in GPU compute clusters. You will help us with the strategic challenges we encounter in maximizing and optimizing our usage of all datacenter resources, including compute, storage, network, and power. You will help build methodologies, tools, and metrics to enable effective resource utilization in a heterogeneous compute environment, and assist with growth planning across our global computing environment.

What you'll be doing:

  • Building and improving our ecosystem around GPU-accelerated computing, including developing large-scale automation solutions
  • Supporting researchers in running their workflows on our clusters, including performance analysis and optimization of deep learning workloads
  • Diagnosing customer utilization deficiencies and job scheduling issues
  • Building automation, tools, and metrics to help us increase productive utilization of resources
  • Collaborating with the scheduler team to improve scheduling algorithms
  • Performing root cause analysis and suggesting corrective actions for problems at both large and small scales
  • Finding and fixing problems before they occur
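To give a concrete flavor of the "metrics to increase productive utilization" work above, here is a minimal sketch of one such metric in Python. The job-record fields and numbers are purely hypothetical illustrations, not NVIDIA's actual tooling; in practice this data would come from a scheduler's accounting system (e.g., Slurm's `sacct`) or GPU telemetry.

```python
from dataclasses import dataclass

@dataclass
class Job:
    gpus: int          # GPUs allocated to the job
    hours: float       # wall-clock hours the allocation was held
    busy_hours: float  # hours the GPUs actually spent computing

def cluster_utilization(jobs, total_gpus, window_hours):
    """Return two views of utilization over a reporting window:
    - allocation rate: share of available GPU-hours handed out by the scheduler
    - productive rate: share of available GPU-hours actually spent computing
    The gap between the two highlights reserved-but-idle capacity,
    a common target for capacity-optimization work."""
    available = total_gpus * window_hours
    allocated = sum(j.gpus * j.hours for j in jobs)
    productive = sum(j.gpus * j.busy_hours for j in jobs)
    return allocated / available, productive / available

# Hypothetical 10-hour window on a 16-GPU cluster:
jobs = [Job(gpus=8, hours=10, busy_hours=9),   # well-utilized training job
        Job(gpus=4, hours=10, busy_hours=2)]   # mostly idle reservation
alloc_rate, productive_rate = cluster_utilization(jobs, total_gpus=16, window_hours=10)
# alloc_rate → 0.75, productive_rate → 0.5
```

In this toy window the scheduler allocated 75% of capacity, but only 50% of GPU-hours did useful work; the second job's idle reservation accounts for the gap.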

What we need to see:

  • Bachelor's degree in Computer Science, Electrical Engineering or related field or equivalent experience
  • 5+ years of experience designing and operating large-scale compute infrastructure
  • Experience analyzing and tuning performance for a variety of AI/HPC workloads
  • Working knowledge of cluster configuration management tools such as Ansible, Puppet, Salt
  • Experience with advanced AI/HPC job schedulers, ideally including Slurm, Kubernetes, RTDA, or LSF
  • Familiarity with container technologies like Docker, Singularity, Shifter, Charliecloud
  • Proficient in Python programming and bash scripting
  • Experience with AI/HPC workflows that use MPI

Ways to stand out from the crowd:

  • Experience with NVIDIA GPUs, CUDA Programming, NCCL and MLPerf benchmarking
  • Experience with Machine Learning and Deep Learning concepts, algorithms, and models
  • Proficient in CentOS/RHEL and/or Ubuntu Linux distros
  • Familiarity with InfiniBand (IPoIB, RDMA) and with fast, distributed storage systems such as Lustre and GPFS for AI/HPC workloads
  • Familiarity with deep learning frameworks like PyTorch and TensorFlow

NVIDIA offers highly competitive salaries and a comprehensive benefits package. We have some of the most brilliant and talented people in the world working for us and, due to unprecedented growth, our world-class engineering teams are growing fast. If you're a creative and autonomous engineer with real passion for technology, we want to hear from you.


Responsibilities For GPU Computing Capacity Optimization Engineer

  • Provide leadership in the design and implementation of GPU compute clusters
  • Optimize capacity management and allocation in GPU compute clusters
  • Build and improve ecosystem around GPU-accelerated computing
  • Support researchers in running and optimizing deep learning workflows
  • Diagnose customer utilization deficiencies and job scheduling issues
  • Build automation, tools, and metrics to increase resource utilization
  • Collaborate with scheduler team to improve scheduling algorithms
  • Perform root cause analysis and suggest corrective actions
  • Proactively find and fix potential problems

Requirements For GPU Computing Capacity Optimization Engineer

Python · Linux · Kubernetes
  • Bachelor's degree in Computer Science, Electrical Engineering or related field or equivalent experience
  • 5+ years of experience designing and operating large-scale compute infrastructure
  • Experience analyzing and tuning performance for a variety of AI/HPC workloads
  • Working knowledge of cluster configuration management tools such as Ansible, Puppet, Salt
  • Experience with AI/HPC advanced job schedulers (e.g., Slurm, K8s, RTDA, LSF)
  • Familiarity with container technologies (Docker, Singularity, Shifter, Charliecloud)
  • Proficient in Python programming and bash scripting
  • Experience with AI/HPC workflows that use MPI

Benefits For GPU Computing Capacity Optimization Engineer

Equity
  • Competitive salaries
  • Comprehensive benefits package
  • Equity