Google is seeking a Silicon Networking Microarchitecture and RTL Lead for its Cloud division, focusing on shaping the future of AI/ML hardware acceleration. This role is part of the ML, Systems, & Cloud AI (MSCA) organization, which is responsible for designing and implementing hardware, software, and infrastructure for Google services and Google Cloud.
The position involves working on cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You'll be developing custom silicon solutions for data center acceleration, collaborating with various teams including architecture, verification, power and performance, and physical design.
As a lead, you'll drive innovative microarchitecture and practical logic solutions, evaluating design options against complexity, performance, power, and area trade-offs. The role requires expertise in ASIC development and design verification, along with deep knowledge of networking-domain concepts such as packet processing and congestion control.
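To ground the congestion-control concept mentioned above, here is a minimal, hypothetical sketch (in C++) of an additive-increase / multiplicative-decrease (AIMD) window update, the textbook idea behind many congestion-control schemes. The struct name, constants, and growth rules are illustrative assumptions for discussion only, not anything from the posting or from Google's hardware:

    // Hypothetical illustration of AIMD congestion control, not Google's design.
    #include <algorithm>
    #include <cstdio>

    struct CongestionWindow {
        double cwnd = 1.0;        // current window, in segments
        double min_cwnd = 1.0;    // floor applied after a decrease
        double max_cwnd = 1024.0; // cap, e.g. a receiver or buffer limit

        // Called per acknowledged segment: grow by roughly one segment per RTT.
        void on_ack() {
            cwnd = std::min(max_cwnd, cwnd + 1.0 / cwnd);
        }

        // Called when congestion is signaled (loss or ECN): halve the window.
        void on_congestion() {
            cwnd = std::max(min_cwnd, cwnd / 2.0);
        }
    };

    int main() {
        CongestionWindow cw;
        for (int i = 0; i < 50; ++i) cw.on_ack();  // growth phase
        cw.on_congestion();                        // back off on a loss signal
        std::printf("cwnd after backoff: %.2f segments\n", cw.cwnd);
        return 0;
    }

In a silicon networking context, logic like this would typically live in fixed-function RTL or firmware rather than software, but the window-update behavior it models is the same.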
The impact of this role extends to Google's global infrastructure, contributing to the development of technologies that power services used by billions of people worldwide. You'll be working on projects that directly influence Google Cloud's AI platforms, including Vertex AI, which brings Gemini models to enterprise customers.
This is an excellent opportunity for experienced professionals who want to work at the intersection of hardware design and AI/ML, contributing to next-generation data center accelerators as part of a team that prioritizes security, efficiency, and reliability in hyperscale computing.