Intel is seeking a Deep Learning Compiler Engineer to join its compiler team, focusing on the MLIR and LLVM frameworks. The role is part of the Data Center & Artificial Intelligence Group (DCAI), which is central to Intel's transformation from a PC company into a leader in cloud and connected computing. The position involves developing an MLIR-based compiler that drives performance improvements on Intel deep learning accelerators, directly impacting cutting-edge deep learning workloads.
The role combines deep technical expertise in compiler optimization with practical applications in machine learning acceleration. You will design and implement new optimizations within the MLIR and LLVM frameworks, collaborate with architecture teams, and engage with internal customers to improve model-level performance for deep learning applications.
This is an excellent opportunity for an experienced engineer with strong C++ skills and compiler expertise to make a significant impact on Intel's AI infrastructure. The position requires both technical depth in compiler design and the ability to work cross-functionally with various teams. The work directly contributes to Intel's broader mission of advancing AI computing capabilities and efficiency.
The role offers the chance to work with cutting-edge technology in the rapidly evolving field of AI acceleration, as part of a team that is central to Intel's strategic transformation. The position is based in either Petah-Tikva or Haifa, Israel, and contributes to projects with global impact on AI and deep learning performance optimization.