The Center for AI Safety (CAIS), a nonprofit dedicated to ensuring the safety of future AI systems, is offering a Research Engineer Intern position. This role provides an opportunity to work on cutting-edge AI safety research alongside experienced researchers.
As an intern, you'll be deeply involved in projects spanning critical areas of AI safety, including Trojans, Adversarial Robustness, Power Aversion, Machine Ethics, and Out-of-Distribution Detection. You'll be treated as a colleague, with the freedom to propose and defend your own experiments and projects.
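To give a flavor of one of these areas, here is a minimal sketch of a standard out-of-distribution detection baseline, scoring inputs by maximum softmax probability. The model, input shapes, and threshold below are illustrative assumptions, not CAIS code.

```python
# Illustrative sketch: out-of-distribution detection via maximum softmax
# probability (MSP). Lower scores suggest an input may be out-of-distribution.
import torch
import torch.nn.functional as F

def msp_scores(model: torch.nn.Module, inputs: torch.Tensor) -> torch.Tensor:
    """Return the maximum softmax probability for each input."""
    model.eval()
    with torch.no_grad():
        logits = model(inputs)              # shape: [batch, num_classes]
        probs = F.softmax(logits, dim=-1)
        return probs.max(dim=-1).values     # shape: [batch]

# Usage with a hypothetical stand-in classifier and placeholder batch.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
batch = torch.randn(8, 1, 28, 28)           # placeholder inputs
scores = msp_scores(model, batch)
flagged = scores < 0.5                      # threshold tuned on validation data
```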
Your responsibilities will include planning and running experiments, conducting code reviews, and collaborating in small teams to produce impactful publications. You'll have access to the organization's internal compute cluster, allowing you to run large-scale experiments on advanced language models.
The ideal candidate is comfortable with core machine learning concepts, able to read and contextualize research papers, and proficient in setting up and debugging ML experiments. Working knowledge of a framework such as PyTorch is essential, as are strong communication skills and the ability to take ownership of your work.
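As a rough indication of the expected baseline, a candidate should find it straightforward to write and debug a minimal PyTorch experiment like the sketch below; the dataset, model, and hyperparameters are placeholders chosen only for illustration.

```python
# Minimal PyTorch training loop of the kind an applicant should be able
# to set up and debug. All components are illustrative placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)                                  # reproducibility

# Synthetic stand-in dataset: 256 samples, 20 features, 2 classes.
X = torch.randn(256, 20)
y = (X.sum(dim=1) > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()                               # backpropagate
        optimizer.step()
        total += loss.item() * xb.size(0)
    print(f"epoch {epoch}: mean loss {total / len(X):.4f}")
```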
This internship offers more than work experience: it's a chance to contribute to the critical field of AI safety and potentially begin a long-term collaboration with CAIS. If you're passionate about steering the future of AI in a safe and beneficial direction, this role could be your entry point into the field.
The Center for AI Safety values diversity and encourages applications from candidates of all backgrounds, even if they don't meet every listed qualification. Join us in our mission to proactively address the challenges of future AI systems and help shape a safer technological future.