The Safety Systems team at OpenAI is seeking a Research Engineer for AI Security & Privacy to pioneer methodologies and build systems that mitigate AI security and privacy risks during model deployment. This role involves designing, implementing, and evaluating novel methods to protect AI models from threats such as data extraction and model inversion attacks. You'll collaborate with cross-functional teams, especially the Post Training team, to integrate privacy-preserving techniques into model development and to establish security and privacy best practices for deployment.
Key responsibilities include:
- Designing, implementing, and evaluating novel methods to protect deployed models from threats such as data extraction and model inversion attacks.
- Collaborating with cross-functional teams, especially the Post Training team, to integrate privacy-preserving techniques such as differentially private training into model development (illustrated in the sketch below).
- Establishing security and privacy best practices for model deployment.
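For context, the sketch below illustrates one such privacy-preserving technique, differentially private SGD (DP-SGD): each example's gradient is clipped to a maximum L2 norm and calibrated Gaussian noise is added before the parameter update. It is a minimal, hypothetical example on toy data; all names and hyperparameters are illustrative only and do not describe OpenAI's internal systems.

```python
# Minimal DP-SGD sketch (illustrative only): clip per-example gradients,
# add Gaussian noise scaled to the clipping bound, then update.
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression data (hypothetical stand-in for real training data).
X = rng.normal(size=(256, 8))
y = (X @ rng.normal(size=8) + 0.1 * rng.normal(size=256) > 0).astype(float)

w = np.zeros(8)
clip_norm = 1.0         # C: per-example gradient clipping bound
noise_multiplier = 1.1  # sigma: noise scale relative to C
lr = 0.1
batch_size = 32

for step in range(200):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    xb, yb = X[idx], y[idx]

    # Per-example gradients of the logistic loss, shape (batch_size, d).
    preds = 1.0 / (1.0 + np.exp(-(xb @ w)))
    per_example_grads = (preds - yb)[:, None] * xb

    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # Sum, add Gaussian noise calibrated to the clipping bound, average, step.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    w -= lr * noisy_sum / batch_size
```

Because each example's influence on the update is bounded by the clipping norm and masked by the added noise, training of this form limits how much any single training record can be recovered through attacks such as data extraction or model inversion.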
The ideal candidate should:
This role is based in San Francisco, CA, with a hybrid work model of 3 days in the office per week. OpenAI offers relocation assistance to new employees and is committed to diversity, equal opportunity, and providing reasonable accommodations to applicants with disabilities.