Google is seeking an ML Hardware Architecture Modeling and Co-design Engineer to shape the future of AI/ML hardware acceleration. This role focuses on developing cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. As part of the ML, Systems, & Cloud AI (MSCA) organization, you'll work with hardware and software architects to model, analyze, and define next-generation TPUs.
The position involves working on the custom silicon solutions behind Google's TPUs, contributing to products used by millions worldwide. You'll be responsible for machine learning workload characterization and benchmarking, hardware-software co-design, and performance and power analysis. The role requires close collaboration with hardware design, software, compiler, and ML research teams.
The ideal candidate should have a strong background in computer architecture and performance analysis, along with experience developing in C++ or Python. Knowledge of processor and accelerator design, and of mapping ML models onto hardware architectures, is highly valued. This is an excellent opportunity for someone passionate about pushing the boundaries of AI hardware acceleration and working on technology that reaches billions of users through Google's services and Google Cloud.
The position offers competitive compensation including base salary, bonus, equity, and comprehensive benefits. Google maintains a strong commitment to diversity, equal opportunity, and an inclusive workplace. This role represents a unique opportunity to work at the intersection of hardware architecture and machine learning, contributing to the next generation of AI acceleration technology.