Join Google's ML, Systems, & Cloud AI (MSCA) organization as a Silicon Networking Microarchitecture and RTL Lead, where you'll shape the future of AI/ML hardware acceleration. This role focuses on advancing cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You'll be instrumental in developing custom silicon solutions for data center accelerators, working on complex digital designs and TPU architecture integration within AI/ML systems.
The position requires deep expertise in ASIC development, with a focus on networking-domain microarchitecture and RTL design. You'll collaborate with cross-functional teams, including architecture, verification, power and performance, and physical design, to deliver high-quality solutions for next-generation data center accelerators. The role demands innovative problem solving to develop practical logic solutions that balance complexity, performance, power, and area constraints.
As part of the MSCA organization, you'll contribute to the infrastructure that supports all Google services and Google Cloud. The team prioritizes security, efficiency, and reliability while pushing the boundaries of hyperscale computing. Your work will have global impact, contributing to projects like Google Cloud's Vertex AI platform and bringing Gemini models to enterprise customers.
This is an excellent opportunity for experienced professionals who want to work at the cutting edge of hardware development for AI/ML applications, combining deep technical expertise with the chance to impact billions of users worldwide through Google's services and cloud infrastructure.