Research Engineer, Alignment

AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity
$295,000 - $440,000
Hybrid
AI

Description For Research Engineer, Alignment

The Alignment team at OpenAI is dedicated to ensuring that AI systems are safe, trustworthy, and consistently aligned with human values, even as they scale in complexity and capability. As a Research Engineer on the Alignment team, you will be at the forefront of ensuring that our AI systems consistently follow human intent, even in complex and unpredictable scenarios. Your role will involve designing and implementing scalable solutions that keep AI systems aligned as their capabilities grow and that integrate human oversight into AI decision-making.

Responsibilities include:

  • Develop and evaluate alignment capabilities that are subjective, context-dependent, and hard to measure.
  • Design evaluations to reliably measure risks and alignment with human intent and values.
  • Build tools and evaluations to study and test model robustness in different situations.
  • Design experiments to characterize scaling laws for alignment as a function of compute, data, context and action lengths, and adversary resources.
  • Design and evaluate new Human-AI-interaction paradigms and scalable oversight methods that redefine how humans interact with, understand, and supervise our models.
  • Train models to be calibrated on correctness and risk (a minimal illustrative sketch follows this list).
  • Design novel approaches for using AI in alignment research.

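To make the calibration item above concrete, here is a minimal, purely illustrative Python sketch of scoring a model's stated confidence against its actual correctness using expected calibration error. The function name, inputs, and numbers are hypothetical; this is not OpenAI's method or tooling.

    # Purely illustrative sketch, not OpenAI's tooling: score how well a model's
    # self-reported confidence matches its actual correctness (expected calibration error).
    from typing import Sequence

    def expected_calibration_error(confidences: Sequence[float],
                                   correct: Sequence[bool],
                                   n_bins: int = 10) -> float:
        """Average |accuracy - mean confidence| over equal-width confidence bins."""
        total = len(confidences)
        ece = 0.0
        for b in range(n_bins):
            lo, hi = b / n_bins, (b + 1) / n_bins
            # Put confidence == 1.0 into the top bin.
            in_bin = [(c, ok) for c, ok in zip(confidences, correct)
                      if lo <= c < hi or (b == n_bins - 1 and c == hi)]
            if not in_bin:
                continue
            mean_conf = sum(c for c, _ in in_bin) / len(in_bin)
            accuracy = sum(ok for _, ok in in_bin) / len(in_bin)
            ece += (len(in_bin) / total) * abs(accuracy - mean_conf)
        return ece

    # Hypothetical example: a model that claims 90% confidence but is right
    # only 60% of the time shows a calibration gap of about 0.3.
    print(expected_calibration_error([0.9] * 5, [True, True, True, False, False]))

In practice such an evaluation would run over large held-out question sets and would likely include risk-weighted variants; the sketch shows only the core metric.
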
This role is based in San Francisco, CA, with a hybrid work model of 3 days in the office per week. OpenAI offers relocation assistance to new employees and is committed to providing reasonable accommodations to applicants with disabilities.

Requirements For Research Engineer, Alignment

  • PhD or equivalent experience in research in computer science, computational science, data science, cognitive science, or similar fields
  • Strong engineering skills, particularly in designing and optimizing large-scale machine learning systems (e.g., PyTorch)
  • Deep understanding of the science behind alignment algorithms and techniques
  • Can develop data visualization or data collection interfaces (e.g., TypeScript, Python)
  • Enjoy fast-paced, collaborative, and cutting-edge research environments
  • Want to focus on developing AI models that are trustworthy, safe, and reliable, especially in high-stakes scenarios

Benefits For Research Engineer, Alignment

  • Relocation Benefits
