
Research Engineer, AI Security & Privacy

AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.
  • Compensation: $295,000 - $440,000
  • Category: Security
  • Level: Staff Software Engineer
  • Work arrangement: Hybrid
  • Company size: 1,000 - 5,000 employees
  • Experience: 3+ years
  • Focus areas: AI · Cybersecurity

Description For Research Engineer, AI Security & Privacy

The Safety Systems team at OpenAI is seeking a Research Engineer for AI Security & Privacy to pioneer methodologies and build systems that reduce AI security and privacy risks during model deployment. This role involves designing, implementing, and evaluating novel methods to protect AI models from threats such as data extraction and model inversion attacks. You'll collaborate with cross-functional teams, especially the Post Training team, to integrate privacy-preserving techniques into AI model development and to establish security and privacy best practices for model deployment.

Key responsibilities include:

  • Researching and implementing solutions to mitigate risks associated with data poisoning, membership inference attacks, and more (a brief membership inference sketch follows this list).
  • Leading efforts in AI security and privacy research for deep learning models.
  • Collaborating closely with various teams to improve AI security and privacy protection of OpenAI's models and systems.
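To make one of the named risks concrete, here is a minimal, illustrative sketch of a loss-threshold membership inference attack in PyTorch: the attacker guesses that examples with unusually low loss were part of the training set. The model, data loaders, and threshold below are hypothetical placeholders, not anything specific to OpenAI's systems.

import torch
import torch.nn.functional as F

@torch.no_grad()
def per_example_losses(model, loader, device="cpu"):
    # Collect per-example cross-entropy losses from a trained classifier.
    model.eval()
    losses = []
    for inputs, labels in loader:
        logits = model(inputs.to(device))
        losses.append(F.cross_entropy(logits, labels.to(device), reduction="none").cpu())
    return torch.cat(losses)

def loss_threshold_attack(model, member_loader, nonmember_loader, threshold):
    # Guess "member" whenever an example's loss is below the threshold, then
    # report how often that guess is right on known members and non-members.
    member_losses = per_example_losses(model, member_loader)
    nonmember_losses = per_example_losses(model, nonmember_loader)
    tpr = (member_losses < threshold).float().mean().item()
    tnr = (nonmember_losses >= threshold).float().mean().item()
    return tpr, tnr

A large gap between the two rates means the model leaks information about which examples it was trained on; measuring and closing that gap is the kind of work described above.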

The ideal candidate should:

  • Be strongly motivated by OpenAI's mission and aligned with its charter.
  • Hold a Ph.D. or other degree in computer science, AI, machine learning, or a related field.
  • Have 3+ years of experience in AI security and privacy research for deep learning models.
  • Possess an in-depth understanding of deep learning research and strong engineering skills, particularly in Python and PyTorch.
  • Be goal-oriented and willing to take on high-value work when needed.
  • Be a team player who enjoys collaborative work environments.

This role is based in San Francisco, CA, with a hybrid work model of 3 days in the office per week. OpenAI offers relocation assistance to new employees and is committed to diversity, equal opportunity, and providing reasonable accommodations to applicants with disabilities.


Responsibilities For Research Engineer, AI Security & Privacy

  • Design, implement, and evaluate novel methods to protect AI models from threats like data extraction and model inversion attacks
  • Collaborate with the Post Training team to integrate privacy-preserving techniques into AI model development (see the illustrative sketch after this list)
  • Lead efforts in researching and implementing solutions to mitigate risks associated with data poisoning, membership inference attacks, and more
  • Work closely with cross-functional teams to establish security and privacy best practices for model deployment
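As a rough illustration of what "privacy-preserving techniques" can mean in practice, the sketch below implements a toy DP-SGD-style training step in plain PyTorch: each example's gradient is clipped individually before aggregation, and calibrated Gaussian noise is added to the sum. All names and hyperparameters here are hypothetical, and a real deployment would also track a formal privacy budget (epsilon, delta), typically via a dedicated library such as Opacus.

import torch
import torch.nn.functional as F

def dp_sgd_step(model, inputs, labels, optimizer, max_grad_norm=1.0, noise_multiplier=1.0):
    # Toy DP-SGD step: per-example gradient clipping followed by Gaussian noise.
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]
    batch_size = inputs.shape[0]

    for x, y in zip(inputs, labels):
        # Compute this example's gradient in isolation and clip its global norm.
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        clip_factor = torch.clamp(max_grad_norm / (total_norm + 1e-6), max=1.0)
        for acc, g in zip(summed_grads, grads):
            acc.add_(g, alpha=clip_factor.item())

    # Add noise scaled to the clipping norm, average, and take an optimizer step.
    for p, acc in zip(params, summed_grads):
        noise = torch.randn_like(acc) * noise_multiplier * max_grad_norm
        p.grad = (acc + noise) / batch_size
    optimizer.step()

Clipping bounds any single example's influence on each update, and the calibrated noise is what yields a formal differential-privacy guarantee when accounted for across training.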

Requirements For Research Engineer, AI Security & Privacy

  • Ph.D. or other degree in computer science, AI, machine learning, or a related field
  • 3+ years of experience in AI security and privacy research for deep learning models
  • In-depth understanding of deep learning research
  • Strong engineering skills, particularly in Python and PyTorch
  • Goal-oriented mindset
  • Team player who enjoys collaborative work environments

Benefits For Research Engineer, AI Security & Privacy

  • Relocation Benefits
