Join Google Cloud's TPU (Tensor Processing Unit) team as a Silicon RTL Design Engineer, where you'll be at the forefront of AI/ML hardware acceleration. This role offers the opportunity to shape cutting-edge TPU technology that powers Google's most demanding AI/ML applications. You'll be part of a diverse team developing the custom silicon behind Google's TPUs, contributing to products used by millions worldwide.
In this position, you'll work on SoCs (systems on chip) designed to accelerate machine learning computation in data centers. Your responsibilities will include solving technical problems with innovative logic design, evaluating design options against performance, power, and area (PPA) trade-offs, and collaborating with teams across architecture, verification, power and performance, and physical design.
The role is part of the ML, Systems, & Cloud AI (MSCA) organization at Google, which is responsible for the hardware, software, machine learning, and systems infrastructure supporting all Google services and Google Cloud. The team prioritizes security, efficiency, and reliability while pushing the boundaries of hyperscale computing. Your work will directly impact Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.
This is an excellent opportunity for someone with strong ASIC/SoC development experience who wants to work on next-generation data center accelerators. The role requires expertise in Verilog/SystemVerilog, micro-architecture design, and ASIC verification; experience with programming languages such as Python and knowledge of processor design and high-performance computing techniques are preferred.