Intel is seeking a Deep Learning Compiler Engineer to join its compiler team, focusing on the MLIR and LLVM frameworks. The role is part of the Data Center & Artificial Intelligence Group (DCAI), which is central to Intel's transformation from a PC company into a leader in cloud and connected computing. The position involves developing an MLIR-based compiler that drives performance improvements on Intel deep learning accelerators, directly impacting cutting-edge deep learning workloads.
The successful candidate will design and implement optimizations within the MLIR and LLVM frameworks, collaborate with architecture teams, and engage with internal customers to improve model-level performance. The role requires strong expertise in C++ programming, compiler design, and high-performance computing, with a focus on delivering measurable performance gains across Intel products.
Working at Intel offers the opportunity to be at the forefront of AI and data center technology, contributing to solutions that power cloud, communications, enterprise, and government data centers worldwide. The position demands both technical excellence and collaborative skills, as you will work with cross-functional teams to optimize compiler performance for deep learning applications.
The role is based in either Petah-Tikva or Haifa, Israel, requiring an on-site presence. Intel offers a comprehensive benefits package and the chance to work on transformative technology that shapes the future of computing and artificial intelligence. This is an excellent opportunity for experienced engineers passionate about compiler optimization and deep learning to make a significant impact in the field of AI acceleration.