Apple Vision Pro is revolutionizing spatial computing by seamlessly blending digital content with physical space. As a Computer Vision/Machine Learning Software Engineer on the ARKit algorithms team, you'll be at the forefront of developing cutting-edge real-time visual perception algorithms for long-term localization.
The role offers a unique opportunity to join Apple's world-leading AR platform team, the group behind the largest platform for 2D/3D computer vision algorithms. You'll tackle previously unsolved challenges in computer vision and machine learning, pushing the boundaries of what's possible in spatial computing.
Your work will directly impact millions of Apple customers through core real-time perception systems that extend Apple's existing localization capabilities to more challenging scenarios. The position requires expertise in both theoretical computer vision/ML concepts and practical software engineering, including optimization for a range of hardware architectures.
The ideal candidate brings strong academic credentials in computer vision or robotics (MSc/PhD preferred) combined with significant industry experience. You'll need deep expertise in C++/Python development, real-time SLAM systems, and modern deep learning frameworks, particularly PyTorch. The role demands both technical excellence and strong collaboration skills, as you'll work closely with cross-functional teams to deliver production-quality algorithms.
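To give a flavor of the kind of component this work touches, here is a minimal, hypothetical PyTorch sketch of a dense local-descriptor network of the sort used in learned visual localization. It is not Apple's implementation and is not part of the posting; the class name, layer sizes, and descriptor dimension are all illustrative assumptions.

```python
# Minimal sketch (illustrative only, not Apple's code) of a dense local-descriptor
# extractor: it maps a camera frame to a grid of L2-normalized feature vectors
# that could be matched across views for localization.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDescriptorNet(nn.Module):
    """Toy dense-descriptor extractor: image -> (B, D, H/4, W/4) unit vectors."""

    def __init__(self, descriptor_dim: int = 64):
        super().__init__()
        # Two stride-2 convolutions downsample by 4x; the last layer sets the
        # descriptor dimension. All sizes here are arbitrary assumptions.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, descriptor_dim, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        desc = self.backbone(image)
        # Unit-normalize each descriptor so matching can use cosine similarity.
        return F.normalize(desc, p=2, dim=1)


if __name__ == "__main__":
    net = TinyDescriptorNet().eval()
    frame = torch.rand(1, 3, 480, 640)   # stand-in for a camera frame
    with torch.no_grad():
        descriptors = net(frame)          # shape: (1, 64, 120, 160)
    print(descriptors.shape)
```

In practice, a network like this would be one small piece of a larger localization pipeline (feature matching, pose estimation, map management), and the real systems described in the posting are far more involved.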
Benefits include competitive base pay ($147,400-$272,100), equity through RSUs and ESPP, comprehensive healthcare, retirement benefits, and education reimbursement. Join Apple's innovative AR/VR team and help shape the future of spatial computing with Apple Vision Pro.