As a member of the Cloud-Scale Machine Learning Acceleration team at Amazon's Annapurna Labs, you'll be responsible for designing and optimizing hardware for AWS data centers, including servers built around AWS Inferentia, Amazon's custom machine learning inference chip. This role combines hardware engineering with cutting-edge machine learning infrastructure development. You'll build next-generation cloud server infrastructure using emulation platforms, collaborating with cross-functional teams to define platform requirements and ensure high-quality design delivery.
The position requires expertise in emulation engineering, system validation, and hardware design. You'll develop testbenches, integrate VIP (Verification IP) components into SoC designs, and verify the functional correctness of emulation models. The role involves working with state-of-the-art emulation platforms from Synopsys (ZeBu), Cadence, and Siemens (Veloce), and requires proficiency in SystemVerilog, C++, and Python.
This is an opportunity to join one of the world's leading tech companies and work on advanced machine learning accelerators. The compensation is competitive, ranging from $129,800 to $212,800 based on location and experience, plus additional benefits including equity and sign-on bonuses. The position is based in Cupertino, CA, putting you at the heart of Silicon Valley's tech innovation.
The ideal candidate will have a strong background in hardware engineering, excellent debugging skills, and the ability to work effectively with interdisciplinary teams. This role offers the chance to shape the future of cloud computing and machine learning infrastructure while working alongside talented professionals in a collaborative environment.