Google is seeking an Applied AI/ML Software Engineer for its Content Safety Platform team. This role sits within Google's Core team, which builds the technical foundation behind Google's flagship products. As a member of this team, you'll develop content safety classifiers that protect users and safeguard business-critical products, including GenAI-based experiences.
The position requires expertise in Machine Learning and Artificial Intelligence, combined with strong software development skills. You'll work on large-scale systems that impact billions of users, focusing on content safety and user protection. The role combines technical leadership with hands-on development: you'll be responsible for designing, developing, and maintaining software solutions.
As part of Google's Core team, you'll have the unique opportunity to influence technical decisions across the company. The team owns and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. You'll work on central solutions that break down technical barriers and strengthen existing systems.
The ideal candidate will bring fresh ideas from various areas, including distributed computing, large-scale system design, artificial intelligence, and natural language processing. You'll need to be versatile, display leadership qualities, and be enthusiastic about taking on new challenges. The role also involves mentoring team members, conducting interviews, and being accountable for team deliverables.
Google offers a collaborative environment where you'll work with teams worldwide to protect users from offensive, sensitive, or potentially harmful content while unlocking new opportunities for the business. This is an excellent opportunity for someone who wants to make a significant impact on user safety while working with cutting-edge AI/ML technologies at one of the world's leading tech companies.