Meta is seeking a Security Engineer specializing in AI Security Threat Analysis and Reporting to join its team protecting Meta and its community from AI-driven cybersecurity risks. This role supports Meta's mission to ensure the safe adoption of Large Language Models (LLMs) by continuously measuring and mitigating their cybersecurity risks.
The position involves working on critical projects such as LlamaFirewall, a foundational system designed to detect and prevent insecure LLM inputs and outputs, and CyberSecEval, a system for assessing cybersecurity risks in LLMs. The role combines technical security expertise with AI/ML knowledge to evaluate and mitigate risks across Meta's AI products and experiences.
As a Security Engineer, you'll be responsible for developing threat models, creating quarterly reports for AI leadership, and providing guidance to developers on security best practices. You'll also contribute to Meta's bug bounty program and help push the industry forward through open source contributions and conference presentations.
The ideal candidate has at least 5 years of experience in Security Threat Detection and Investigation Engineering, strong analytical skills, and coding ability. The position offers competitive compensation ranging from $147,000 to $208,000 per year, plus bonus and equity opportunities. Working at Meta provides exposure to cutting-edge AI security challenges and the opportunity to impact the security of AI systems used by billions of people.
Meta offers a comprehensive benefits package and promotes an inclusive work environment, making it an excellent opportunity for security professionals looking to work at the intersection of AI and cybersecurity. The role is based in Bellevue, WA, and requires collaboration with various teams across Meta to ensure the secure development and deployment of AI technologies.