In this role, you'll shape the future of AI/ML hardware acceleration, driving the cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You'll be part of a diverse team that pushes boundaries, developing the custom silicon that underpins Google's TPUs. You'll contribute to the innovation behind products loved by millions worldwide, applying your design and verification expertise to complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems.
You will play a critical role in the bring-up and debugging of silicon and hardware platforms, working on the system integration and silicon validation aspects of our Cloud TPU projects. You will own the detailed design and implementation of embedded software for the internal and external IPs that make up our complex machine learning System-on-a-Chip (SoC). You will develop tests and tools against a defined set of requirements and processes to ensure the smooth, reliable performance of our AI/ML projects, helping to ensure that tests provide functional coverage and adding new capabilities as required. You will work with IP specifications, firmware, drivers, and real-time operating systems, and create software to operate and qualify hardware. You will work closely with design, verification, emulation, and silicon validation teams.
Behind everything our users see online is the architecture built by the Technical Infrastructure team to keep it running. From developing and maintaining our data centers to building the next generation of Google platforms, we make Google's product portfolio possible. We're proud to be our engineers' engineers and love voiding warranties by taking things apart so we can rebuild them. We keep our networks up and running, ensuring our users have the best and fastest experience possible.