Google is seeking an AI Safety Data Scientist to join its Trust & Safety organization, working within the AI Safety Protections team. The role centers on developing and implementing AI/LLM-powered solutions that keep generative AI safe across Google's products. The position involves working with sensitive content and requires expertise in data analysis, machine learning, and AI safety.
The ideal candidate will develop scalable safety solutions for AI products, apply statistical methods to evaluate the effectiveness of protection measures, and drive business outcomes through data-driven insights. They will work alongside teams building cutting-edge AI technologies while focusing on protecting users from real-world harms.
This role offers the opportunity to work with the latest advancements in AI/LLM technology, collaborating with Google DeepMind and other teams on products such as Gemini, Juno, and Veo. It requires a strong background in data science, machine learning, and project management, with particular emphasis on abuse and fraud prevention, web security, and harmful-content moderation.
The role combines technical depth with strategic thinking: the successful candidate must be able to both build solutions and communicate effectively with stakeholders at all levels. The position is based in Bengaluru, India, and offers the chance to work on projects that directly impact user safety and trust in Google's products.
Working at Google means being part of a global technology leader, with access to world-class resources and the chance to work on products that reach billions of users. The company offers a collaborative environment and is committed to diversity, equity, and inclusion.