Apple's Intelligent System Experiences team is seeking a Multimodal Generative Modeling Research Engineer. This role focuses on advancing and deploying state-of-the-art generative AI technology that reaches Apple users worldwide.
The position involves working on core technology behind features like Genmoji, Image Playground, and Image Wand. From web-scale datasets to distributed training of foundation models, and from application-specific adapters to on-device optimizations, you'll be at the forefront of developing compelling user experiences through Apple Intelligence.
As a Research Engineer, you'll be responsible for architecting and deploying production-scale multimodal ML systems. The role requires expertise in applied research, solid ML fundamentals, and hands-on experience training and adapting image, video, audio, and multimodal generative AI models.
You'll work in a collaborative environment alongside ML researchers, software engineers, and hardware and design specialists. The role offers a unique opportunity to turn cutting-edge generative AI research into real-world applications that enhance user creativity and device interaction.
The position offers competitive compensation ($135,400–$250,600) and comprehensive benefits including medical coverage, stock options, and educational support. This is an excellent opportunity for someone passionate about advancing the field of generative AI while creating meaningful impact at one of the world's leading technology companies.
Key projects include working on Apple's foundation models and implementing Stable Diffusion with Core ML on Apple Silicon, as referenced in the company's machine learning research publications. The role combines research innovation with practical implementation, requiring both technical expertise and a user-focused mindset.