AI Safety Data Scientist

A global technology company that specializes in internet-related services and products.
Data
Mid-Level Software Engineer
In-Person
5,000+ Employees
5+ years of experience
AI · Cybersecurity

Job Description

Google is seeking an AI Safety Data Scientist to join its Trust & Safety organization, specifically the AI Safety Protections team. This role is central to developing and implementing AI/LLM-powered solutions that ensure the safety of generative AI across Google's products. The position involves working with sensitive content and requires expertise in data analysis, machine learning, and AI safety.

The ideal candidate will be responsible for developing scalable safety solutions for AI products, applying statistical methods to examine protection measures, and driving business outcomes through data-driven insights. They will work with teams developing cutting-edge AI technologies while focusing on protecting users from real-world harms.

This role offers the opportunity to work with the latest advancements in AI/LLM technology, collaborating with Google DeepMind and other teams on products like Gemini, Juno, and Veo. The position requires a strong background in data science, machine learning, and project management, with particular emphasis on abuse and fraud disciplines, web security, and the moderation of harmful content.

The role combines technical expertise with strategic thinking, requiring someone who can both develop technical solutions and communicate effectively with stakeholders at all levels. The position is based in Bengaluru, India, and offers the chance to work on meaningful projects that directly impact user safety and trust in Google's products.

Working at Google provides the opportunity to be part of a global leader in technology, with access to cutting-edge resources and the chance to work on projects that affect billions of users. The company offers a collaborative environment and is committed to diversity, equity, and inclusion.

Responsibilities For AI Safety Data Scientist

  • Develop scalable safety solutions for AI products across Google by leveraging advanced machine learning and AI techniques
  • Apply statistical and data science methods to examine Google's protection measures, uncover potential shortcomings, and develop actionable insights
  • Drive business outcomes by crafting compelling data stories for various stakeholders, including executive leadership
  • Develop automated data pipelines and self-service dashboards to provide timely insights at scale

Requirements For AI Safety Data Scientist

  • Bachelor's degree or equivalent practical experience
  • 5 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data
  • 5 years of experience managing projects and defining project scope, goals, and deliverables
  • Experience with programming languages (Python, Julia) and SQL
  • Experience with prompt engineering and fine-tuning LLMs
  • Proficiency in applying machine learning techniques to large datasets