Joining NVIDIA's AI Efficiency Team means contributing to the infrastructure that powers our innovative AI research. This team focuses on optimizing the efficiency and resiliency of AI workloads, as well as developing scalable AI and data infrastructure tools and services. Our objective is to deliver a stable, scalable environment for AI researchers, providing them with the resources and scale needed to foster innovation.
As a Senior SRE at NVIDIA, you will have the opportunity to work on innovative technologies that power the future of AI and data science, and be part of a dynamic and supportive team that values learning and growth. The role provides the autonomy to work on meaningful projects with the support and mentorship needed to succeed, and contributes to a culture of blameless postmortems, iterative improvement, and risk-taking.
Responsibilities:
- Develop software solutions to ensure reliability and operability of large-scale systems supporting mission-critical use cases.
- Gain a deep understanding of our system operations, scalability, interactions, and failures to identify improvement opportunities and risks.
- Create tools and automation to reduce operational overhead and eliminate manual tasks.
- Establish frameworks, processes, and standard methodologies to enhance operational maturity, team efficiency, and accelerate innovation.
- Define meaningful and actionable reliability metrics to track and improve system and service reliability.
- Oversee capacity and performance management to facilitate infrastructure scaling across public and private clouds globally.
- Build tools to improve our service observability for faster issue resolution.
- Practice sustainable incident response and blameless postmortems.
Requirements:
- Minimum of 6 years of experience in SRE, Cloud platforms, or DevOps with large-scale microservices in production environments.
- Bachelor's degree or equivalent experience.
- Strong understanding of SRE principles, including error budgets, SLOs, and SLAs.
- Experience with AI training, inference, and data infrastructure services.
- Expertise in building and operating large-scale observability platforms for monitoring and logging (e.g., ELK, Prometheus, Loki).
- Proficiency in programming languages such as Python, Go, Perl, or Ruby.
- Hands-on experience with scaling distributed systems in public, private, or hybrid cloud environments.
- Experience in deploying, supporting, and supervising services, platforms, and application stacks.
- Knowledge of CI/CD systems such as GitLab, and familiarity with Infrastructure as Code (IaC) methodologies and tools.
- Excellent communication and collaboration skills.
Nice to have:
- Good understanding of DL frameworks such as PyTorch, TensorFlow, and JAX, and orchestrators such as Ray.
- Strong background in software design and development.
- Experience operating large-scale distributed systems with strong SLAs.
- Extensive experience in operating data platforms.
- Proficiency in incident, change, and problem management processes.