The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. As a Sr. ML Kernel Performance Engineer, you'll be part of the Acceleration Kernel Library team, working at the hardware-software boundary to craft high-performance kernels for ML functions.
The role involves working with AWS's custom ML accelerators to ensure optimal performance for customer workloads. You'll be part of the broader Neuron Compiler organization, working across multiple technology layers, from frameworks and compilers to runtime and collectives. The position offers a unique opportunity to work at the intersection of machine learning, high-performance computing, and distributed architectures.
Key responsibilities include designing and implementing high-performance compute kernels, conducting detailed performance analysis and optimization at the kernel level, and working directly with customers to enable and optimize their ML models. You'll collaborate with compiler, runtime, framework, and hardware teams to push the boundaries of AI acceleration technology.
The team values work-life balance and operates in an environment that encourages innovation and experimentation. You'll have the opportunity to architect business-critical features, publish cutting-edge research, and mentor experienced engineers. The position offers competitive compensation ranging from $151,300 to $261,500 per year, depending on location, plus equity and comprehensive benefits.
This role is perfect for someone passionate about performance optimization, machine learning systems, and working at the cutting edge of AI acceleration technology. You'll be part of a diverse, inclusive team culture that embraces differences and values continuous learning and innovation.