Google is seeking a TPU Architect to join its hardware team focused on developing custom silicon solutions for Google's direct-to-consumer products. The role combines computer architecture expertise with machine learning acceleration and requires a deep understanding of both hardware and ML concepts. The position involves working with Google's Tensor Processing Units (TPUs), specialized hardware accelerators designed for machine learning workloads.
The ideal candidate will have a strong foundation in computer architecture, with experience in microarchitecture, cache systems, and memory subsystems. They will analyze and optimize ML workloads, develop performance-analysis tools, and collaborate with implementation teams to improve the TPU architecture. Knowledge of ML algorithms and compiler optimization is highly valued.
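To give a flavor of what "analyzing ML workloads" means in practice, the sketch below does a first-order roofline-style estimate for a dense matrix multiply: it counts FLOPs and minimum bytes moved, derives arithmetic intensity, and checks whether a hypothetical accelerator would be compute- or bandwidth-bound. This is purely illustrative and assumes made-up peak-compute and memory-bandwidth figures; it is not Google-internal tooling and does not describe any real TPU.

```python
# Minimal roofline-style workload analysis sketch (illustrative only).
# The peak-compute and bandwidth numbers are hypothetical placeholders,
# not specifications of any real accelerator.

def matmul_stats(m: int, k: int, n: int, bytes_per_elem: int = 2):
    """FLOPs and minimum bytes moved for a dense (m,k) x (k,n) matmul."""
    flops = 2 * m * k * n                                  # one multiply + one add per MAC
    bytes_moved = (m * k + k * n + m * n) * bytes_per_elem  # read A, read B, write C once
    return flops, bytes_moved

def roofline_time(flops: float, bytes_moved: float,
                  peak_flops: float, peak_bw: float) -> float:
    """Lower-bound runtime: limited by either compute or memory bandwidth."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

if __name__ == "__main__":
    # Hypothetical accelerator: 100 TFLOP/s peak compute, 1 TB/s memory bandwidth.
    PEAK_FLOPS = 100e12
    PEAK_BW = 1e12

    flops, moved = matmul_stats(m=1024, k=4096, n=4096)
    intensity = flops / moved                              # FLOPs per byte
    bound = "compute" if intensity > PEAK_FLOPS / PEAK_BW else "memory"
    t = roofline_time(flops, moved, PEAK_FLOPS, PEAK_BW)
    print(f"arithmetic intensity: {intensity:.1f} FLOP/B ({bound}-bound)")
    print(f"roofline lower bound: {t * 1e6:.1f} us")
```

In real performance work this kind of back-of-the-envelope model is only a starting point; architects refine it with details such as on-chip buffering, tiling, and interconnect behavior.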
This role offers the opportunity to shape the future of Google's AI hardware infrastructure, working at the intersection of machine learning and computer architecture. The position is based in New Taipei City, Taiwan, where you'll be part of a team pushing the boundaries of what's possible in ML acceleration hardware.
As a TPU Architect, you'll contribute to Google's mission of organizing the world's information by developing the hardware that powers its AI systems. You'll work with cross-functional teams, combining expertise in hardware architecture, machine learning, and performance optimization to create more efficient and powerful AI accelerators.
You'll work on cutting-edge technology that impacts millions of users worldwide, with the backing of Google's resources and expertise in both hardware and AI, as part of a team that values innovation, technical excellence, and collaborative problem-solving.