As a Safety and Alignment Research Engineer on the Post-Training team at Character.AI, you'll build tools to align our models and ensure they meet the highest standards of safety in the real world.
As increasingly powerful AI models are deployed, building tools to align and steer them becomes crucial. Your work will directly contribute to our groundbreaking advancements in AI, helping shape an era where technology is not just a tool but a companion in daily life.
About the role: The Post-Training team is responsible for developing our powerful pretrained language models into intelligent, engaging, and aligned products. As a Post-Training Researcher focused on Safety, you'll work closely with our Policy, Research, and Data teams, and deploy your changes directly to the product.
Example projects:
Job Requirements:
Nice to Have:
About Character.AI: Founded in 2021, Character.AI is one of the most widely used AI platforms worldwide, enabling users to interact with AI tailored to their unique needs and preferences. In just two years, we achieved unicorn status and were named Google Play's AI App of the Year.
Join us to shape the future of AGI and be part of a diverse team that values unique perspectives and upholds a non-discrimination policy.