AWS Machine Learning accelerators are at the forefront of AWS innovation in building Generative AI solutions. The role sits within the Amazon Annapurna Labs team, which leads silicon development at AWS. As a Machine Learning Compiler Engineer II on the AWS Neuron team, you'll be instrumental in developing the compiler that handles large-scale ML workloads. The position involves working with cutting-edge technology: Inferentia, which delivers best-in-class ML inference performance, and Trainium, which does the same for ML training.
The team is developing a sophisticated deep learning compiler stack that transforms neural network descriptions from frameworks like TensorFlow, PyTorch, and MXNet into executable code for these accelerators. You'll work alongside experienced engineers and researchers, contributing to a toolchain aimed at delivering significant performance improvements.
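To give a flavor of the workflow this compiler stack supports, the sketch below shows a PyTorch model being traced and compiled ahead of time for a Neuron device. It is illustrative only: the `torch_neuronx.trace` entry point is taken from the publicly documented Neuron SDK, the model and file names are made up for the example, and the team's actual internal toolchain is far more involved than this snippet suggests.

```python
# Illustrative sketch, not the team's pipeline: a PyTorch model is captured
# and handed to the Neuron compiler, which lowers the graph into an executable
# for Inferentia/Trainium. torch_neuronx.trace is assumed from the public
# Neuron SDK documentation.
import torch
import torch_neuronx


class SmallMLP(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 10),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = SmallMLP().eval()
example_input = torch.rand(1, 128)

# Trace the model: the framework graph is extracted and compiled ahead of time
# into an artifact that runs on the accelerator.
neuron_model = torch_neuronx.trace(model, example_input)

# The compiled module can be saved and later reloaded with torch.jit.load.
torch.jit.save(neuron_model, "mlp_neuron.pt")
```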
The role offers excellent work-life balance, with flexibility in working hours and a strong emphasis on personal development. You'll have opportunities for mentorship and career growth within a team that values knowledge sharing. The position involves collaboration with AWS ML services teams, participation in pre-silicon design, and bringing new products to market.
The team culture emphasizes inclusion and diversity, with access to employee-led affinity groups and innovative benefit offerings. Amazon's 16 Leadership Principles guide the team's approach to seeking diverse perspectives and building trust. The position offers competitive compensation based on geographic location and includes comprehensive benefits.
This role is perfect for someone passionate about compiler optimization, machine learning, and hardware acceleration, offering the chance to work on technology that powers workloads for major customers such as Snap, Autodesk, Amazon Alexa, and Amazon Rekognition.