Google's Trust & Safety team is seeking a Data Scientist to focus on AI Safety Protections. The role is central to protecting users and partners across Google's products from abuse, and it combines data science, machine learning, and security expertise to build scalable safety solutions for AI products.
As an AI Safety Protections Data Scientist, you'll be part of a diverse team working to identify and tackle the biggest challenges in product safety and integrity. The role requires strong analytical skills to examine protection measures, uncover potential vulnerabilities, and develop data-driven solutions. You'll work with advanced ML techniques, create automated pipelines, and craft compelling data stories for stakeholders.
The ideal candidate has at least 2 years of experience in data analysis and project management, with a strong background in languages such as Python, SQL, or Java. Experience applying machine learning to large datasets is essential. The role involves handling sensitive content and requires someone who can maintain composure while working with challenging material.
Working at Google offers the opportunity to impact billions of users while collaborating with world-class experts in a supportive environment. The Trust & Safety team operates globally across more than 40 languages, making this an excellent opportunity for someone passionate about using data science to make the internet safer.
This position is based in Bengaluru, India, and offers the chance to work on cutting-edge AI safety challenges while contributing to Google's mission of organizing the world's information. The role combines technical expertise with strategic thinking, making it ideal for someone who wants to apply their data science skills to meaningful safety and security challenges.