Google is seeking a TPU Microarchitecture Design Lead to join its team developing next-generation technologies for machine learning acceleration. This role sits at the intersection of AI and hardware engineering, focusing on the specialized processors (TPUs) that power Google's machine learning infrastructure.
In this position you will lead technical teams designing and implementing machine learning IPs for Google Silicon SoCs. You'll be responsible for defining microarchitecture details for ML processors and accelerators, overseeing RTL development, and ensuring successful delivery of complex hardware projects. The role demands deep expertise in computer architecture, particularly as it applies to machine learning acceleration.
Working at Google's Silicon team, you'll be part of the organization that combines the best of Google AI, Software, and Hardware to create radically helpful experiences. The team researches, designs, and develops new technologies and hardware to make computing faster, seamless, and more powerful, with the ultimate goal of making people's lives better through technology.
Compensation is competitive, with a base salary range of $183,000-$271,000, plus bonus, equity, and comprehensive benefits. You'll have the option to work from either Mountain View or San Diego, collaborating with some of the industry's brightest minds in machine learning hardware development.
The ideal candidate will bring at least 8 years of experience in RTL design and microarchitecture, along with a proven record of leading IP/SoC design teams. The role is well suited to someone passionate about pushing the boundaries of machine learning hardware and leading teams that build next-generation AI accelerators.
Google's commitment to innovation, combined with its resources and scale, positions this role to make a significant impact on the future of machine learning hardware. You'll work on technologies used by millions of people, helping to advance the state of the art in AI acceleration.