The Alignment team at OpenAI is dedicated to ensuring that AI systems are safe, trustworthy, and consistently aligned with human values, even as they scale in complexity and capability. As a Research Engineer on the Alignment team, you will be at the forefront of ensuring that our AI systems consistently follow human intent, even in complex and unpredictable scenarios. You will design and implement scalable solutions that keep AI systems aligned as their capabilities grow and that integrate human oversight into AI decision-making.
Responsibilities include:
This role is based in San Francisco, CA, with a hybrid work model of 3 days in the office per week. OpenAI offers relocation assistance to new employees and is committed to providing reasonable accommodations to applicants with disabilities.