Join Google Cloud as a Silicon Subsystems RTL Design Engineer and shape the future of AI/ML hardware acceleration. This role centers on developing cutting-edge Tensor Processing Unit (TPU) technology, the custom silicon that powers Google's most demanding AI/ML applications. You'll be part of an innovative team building TPU subsystems that contribute to products used by millions worldwide.
The position involves working on ASICs that accelerate and improve the efficiency of data center traffic. You'll collaborate with teams across architecture, verification, power and performance, and physical design to deliver next-generation data center accelerators. The role requires solving technical problems through innovative micro-architecture and practical logic design, while weighing trade-offs among complexity, performance, power, and area.
As part of the ML, Systems, & Cloud AI (MSCA) organization, you'll work on infrastructure that supports all Google services and Google Cloud. The team prioritizes security, efficiency, and reliability while pushing the boundaries of hyperscale computing. This includes work on Google Cloud's Vertex AI, the leading AI platform for enterprise customers using Gemini models.
The ideal candidate will have strong experience in ASIC development, verification, and subsystem design. Knowledge of high-performance computing and low-power design techniques, along with familiarity with processor design and memory hierarchies, is valuable. This is an opportunity to work at the forefront of AI hardware development while contributing to Google's global impact in both software and hardware.