Google's Trust & Safety team is seeking an AI Safety Data Scientist to join its AI Safety Protections team. The role focuses on developing and deploying AI/LLM-powered solutions that keep generative AI safe across Google's products. It involves working with sensitive content and requires expertise in data analysis, machine learning, and AI safety.
The ideal candidate will partner with teams building advanced AI technologies while protecting users from real-world harms. Responsibilities include developing scalable safety solutions, evaluating the effectiveness of protection measures, and producing data-driven insights. The role pairs technical depth in machine learning and data science with a focus on AI safety and security.
Within Trust & Safety, you'll collaborate with engineers and product managers to identify and combat abuse and fraud. The position offers the opportunity to work with advanced AI technologies such as Gemini, Juno, and Veo in collaboration with Google DeepMind, and is well suited to someone passionate about AI safety who wants to apply technical skills to protect users across Google's product ecosystem.
The role requires strong analytical skills, experience with machine learning techniques, and the ability to work with large datasets. You'll join a team dedicated to maintaining the highest levels of user safety while working at the frontier of AI technology.