Dynamo AI is at the forefront of developing safe and responsible Large Language Models (LLMs) with a focus on privacy and real-world applications. As an ML Research Engineer specializing in LLM Safety, you'll join a dynamic team of ML Ph.D.s and builders working on the premier platform for private and personalized LLMs. The role offers a unique opportunity to impact Fortune 500 companies' AI adoption while working at a CB Insights Top 100 AI Startup.
You'll be responsible for owning an LLM vertical focused on safety, conducting cutting-edge research, and developing novel techniques that make LLMs safer and more helpful. The position combines academic research with practical industry applications, letting you see your impact within weeks rather than years. You'll generate synthetic data, train models, and implement robust production code.
The ideal candidate has deep expertise in LLM safety techniques, extensive experience with a range of model architectures, and the ability to adapt quickly to new research developments. You'll collaborate with the research team to co-author papers and patents while working in an environment free from traditional Big Tech bureaucracy.
This role offers the opportunity to directly influence the responsible development of AI technology, alongside a team that prioritizes safety, privacy, and ethical considerations. You'll be part of a mission to democratize AI advancements responsibly, ensuring that technological progress doesn't come at the cost of user privacy or safety.
Join Dynamo AI if you're passionate about making AI safer, more private, and more responsible while working in a fast-paced environment where your research and development work directly impacts real-world applications.