Exa is building a new search engine from scratch to power AI applications. The company operates substantial infrastructure, including a $5M H200 GPU cluster and large-scale web crawling operations, and develops its own embedding models and high-performance vector databases in Rust.
The ML team trains foundation models for search, with the goal of building systems that can instantly filter and retrieve precise information from the world's knowledge, regardless of query complexity. This role is a strong fit for someone passionate about advancing search technology and working with large-scale systems.
As an ML Research Engineer, you'll develop novel transformer-based search architectures. The position involves working with massive datasets, building rigorous evaluation systems, and pushing the boundaries of search technology. You'll work with hundred-billion-parameter models and build RLAIF pipelines for search optimization.
The role offers competitive compensation ($150K-$300K plus equity) and is based in San Francisco, with visa sponsorship available for international candidates. This is an exceptional opportunity for those who want to work on challenging problems in AI and search technology, with access to substantial computational resources and the chance to make a significant impact on how the world accesses information.
The ideal candidate has graduate-level ML experience, strong PyTorch skills, and a deep interest in large-scale data systems. You'll join a team that values innovation and technical excellence, working on projects that directly shape the future of AI-powered search.