Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads.
In this role, you will help lead efforts to uncover unknown generative AI issues and novel threats and vulnerabilities that are not captured by traditional testing methods. You will drive and manage complex work, applying technical tools, scripts, and automation, and develop repeatable processes that multiply CART's impact, yield valuable insights, and identify and mitigate emerging content safety risks within Google's GenAI products. Your ability to think strategically will be instrumental in shaping the future of AI development, ensuring that Google's AI products are safe, fair, and unbiased.
At Google, we work hard to earn our users' trust every day. Trust & Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. As a diverse team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
Key Responsibilities:
This role requires a combination of technical expertise, leadership skills, and a deep understanding of AI safety and security. You will be at the forefront of ensuring that Google's AI products remain safe, fair, and unbiased, making a significant impact on the future of AI development.