Apple's Video Computer Vision organization is seeking an Applied Research Engineer to join its team working on cutting-edge multimodal large language models (LLMs). The role sits at the intersection of research and product development, advancing AI and computer vision capabilities across Apple products.
The position involves developing and exploring multimodal LLMs that integrate various types of data including text, image, video, and audio. You'll be part of a centralized applied research and engineering organization responsible for developing real-time on-device Computer Vision and Machine Perception technologies across Apple products.
Key aspects of the role include:

- Exploring and developing multimodal LLMs that integrate text, image, video, and audio data
- Building real-time, on-device computer vision and machine perception technologies that ship across Apple products
- Balancing research innovation with product delivery while maintaining Apple's quality standards
The role offers competitive compensation ($135,400-$250,600) plus equity opportunities through stock programs. Benefits include comprehensive medical/dental coverage, retirement benefits, education reimbursement, and potential bonuses or relocation assistance.
This is an excellent opportunity for someone passionate about AI research who wants to push the state of the art while delivering practical solutions that reach Apple's product line. The team balances innovation with product delivery, building state-of-the-art experiences that uphold Apple's high quality standards.