Research and Pathfinding Internship: AI Workload Compiler Optimization for CPU and GPU


About This Role

  • Join Intel's Datacenter and AI Software Pathfinding team to advance compiler infrastructure for heterogeneous AI workloads.
  • This internship focuses on developing novel optimization techniques for AI kernel compilation that target both CPU (Intel AMX/AVX-512) and GPU architectures from a unified representation built on the MLIR/LLVM framework.
Project Context

  • Modern AI systems are increasingly heterogeneous: CPUs handle small models, tool execution, feature engineering, and orchestration logic, while GPUs focus on the large matrix operations and attention mechanisms of larger models.
  • However, existing compiler frameworks struggle to generate optimized code for both architectures from the same high-level representation (e.g. a Helion/Triton DSL).
  • This internship addresses this challenge by integrating hierarchical optimization abstractions with equality saturation techniques into an MLIR-based compilation pipeline.
  • The goal is to enable automatic discovery and autotuning of high-performance fused kernels through exhaustive algebraic exploration combined with target-specific scheduling decisions.
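To give a concrete flavor of the "exhaustive algebraic exploration" mentioned above, here is a minimal, purely illustrative Python sketch of the idea behind equality saturation: apply rewrite rules until no new equivalent expressions appear, then extract the cheapest form under a toy cost model. (Production engines such as egg use e-graphs to share subterms compactly; the rules, operators, and costs below are invented for illustration.)

```python
# Illustrative sketch of equality saturation: explore all expressions
# equivalent to a seed expression under a few algebraic rewrite rules,
# then pick the cheapest one. Expressions are nested tuples (op, lhs, rhs).

def step(e):
    """All expressions reachable from e by one rewrite anywhere in the term."""
    results = set()
    if not isinstance(e, tuple):
        return results
    op, x, y = e
    if op == "mul":
        results.add(("mul", y, x))            # commutativity: x*y -> y*x
        if y == 2:
            results.add(("shl", x, 1))        # strength reduction: x*2 -> x<<1
    if op == "div" and isinstance(x, tuple) and x[0] == "mul" and x[2] == y:
        results.add(x[1])                     # cancellation: (a*y)/y -> a
    for xs in step(x):                        # rewrite inside the left child
        results.add((op, xs, y))
    for ys in step(y):                        # rewrite inside the right child
        results.add((op, x, ys))
    return results

def saturate(e, limit=1000):
    """Grow the set of equivalent expressions until a fixed point (or limit)."""
    seen, frontier = {e}, {e}
    while frontier and len(seen) < limit:
        frontier = {n for f in frontier for n in step(f)} - seen
        seen |= frontier
    return seen

OP_COST = {"div": 20, "mul": 5, "shl": 1}     # toy per-operation costs

def cost(e):
    return 0 if not isinstance(e, tuple) else OP_COST[e[0]] + cost(e[1]) + cost(e[2])

expr = ("div", ("mul", "a", 2), 2)            # (a * 2) / 2
best = min(saturate(expr), key=cost)
```

Running this, `best` collapses to the leaf `"a"`: because every equivalent form is kept until extraction, the multiply/divide cancellation is found even though the strength-reduced `("shl", "a", 1)` form would hide it from a greedy rewriter.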
What You'll Work On

  • You will explore the design and implementation of a PEG (E-graph + PEG) abstraction that combines:
      • Algebraic optimization: E-graph-based equality saturation to systematically explore equivalent operator compositions.
      • Hierarchical scheduling: Multi-level schedule representations (loop tiling, vectorization, memory placement), focused on Xeon CPUs and extending to Intel GPU targets.
      • Cost-driven and constraint pruning: Resource-aware cost models and constraint-satisfiability evaluation to eliminate infeasible schedules early.
      • MLIR integration: Leverage MLIR's retargetable backend infrastructure for multi-target code generation.
      • Verification: Prototype equivalence checking using probabilistic and symbolic methods.

Why This Internship?

  • Cutting-edge applied research: Work at the intersection of research and product in a pathfinding team.
  • Real-world impact: Contribute to Intel's compiler infrastructure for heterogeneous AI systems.
  • Mentorship: Learn from experts in compilers, MLIR, and performance optimization.
  • Publication opportunities: Potential for conference/workshop publications and open-source contributions.
  • Intel ecosystem: Gain deep knowledge of Intel CPU features (AMX, AVX-512) and GPU architectures.

Qualifications

  • Minimum qualifications are required to be initially considered for this position.
  • Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.
Required Qualifications

  • Experience with compiler internals or programming languages (IR design, optimization passes).
  • Programming skills: Python and C++ (MLIR/LLVM ecosystem experience desirable).
  • Architecture knowledge: familiarity with CPU architecture (cache hierarchies, SIMD/vector instructions).
  • Academic standing: currently enrolled in bachelor's, master's, or PhD studies in Computer Science, Electrical Engineering, or a related field.

Preferred Qualifications

  • Theoretical foundation: understanding of algebraic rewrite systems and/or e-graphs.
  • Prior work with the LLVM ecosystem, MLIR dialects, or equality saturation frameworks (egg, eqsat).
  • Experience with autotuning or cost modeling for performance optimization.
  • Knowledge of probabilistic algorithms and SMT solvers (e.g. Z3).
  • Familiarity with tensor compiler frameworks: Mirage, Halide, TVM, Triton, or similar.
  • Publications or projects in compilers or program synthesis.
  • Experience with workload optimization for Intel architectures (AMX, AVX-512, SYCL).

The listed requirements may be obtained through a combination of relevant industry experience, internships, and/or schoolwork/classes/research.
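As a flavor of the hierarchical-scheduling and cost-driven-pruning work this role describes, here is a hedged sketch (the cache budget, tile sizes, and cost model are all invented for illustration): enumerate candidate tile sizes for a matrix-multiply loop nest, prune candidates whose working set overflows an assumed L1 budget, and rank the survivors by a toy cost function.

```python
# Illustrative tiling search for C[ti x tj] += A[ti x tk] @ B[tk x tj]:
# constraint pruning (cache capacity) followed by cost-model ranking.
# All constants are assumptions, not real hardware parameters.

CACHE_BYTES = 32 * 1024        # assumed per-core L1 data-cache budget
ELEM = 4                       # bytes per float32 element

def working_set(ti, tj, tk):
    """Bytes touched by one A-tile, B-tile, and C-tile together."""
    return ELEM * (ti * tk + tk * tj + ti * tj)

def feasible(candidates):
    """Constraint pruning: keep only tilings that fit the cache budget."""
    return [c for c in candidates if working_set(*c) <= CACHE_BYTES]

def cost(ti, tj, tk, vec=16):
    """Toy model: penalize non-vector-multiple tiles, reward arithmetic intensity."""
    remainder_penalty = 1.0 if tj % vec == 0 else 2.0
    intensity = (2 * ti * tj * tk) / working_set(ti, tj, tk)  # flops per byte
    return remainder_penalty / intensity

candidates = [(ti, tj, tk)
              for ti in (8, 16, 32, 64)
              for tj in (16, 32, 64)
              for tk in (16, 32, 64)]
ok = feasible(candidates)
best = min(ok, key=lambda c: cost(*c))
```

In a real pipeline of the kind the posting describes, the surviving schedules would be materialized as MLIR transformations and measured by an autotuner rather than ranked by a hand-written formula.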
What We Offer

  • At Intel, you come to work in a collaborative, supportive environment where your equally brilliant colleagues will push you to be your best.
  • There's no fear of failure; we know that's how innovation happens.
  • And you'll never be bored.
  • We offer competitive benefits and pay, opportunities for professional development and the flexibility you need to achieve balance.
  • Intel fosters a collaborative environment allowing the brightest minds in the world to come together to achieve exceptional results.
  • Besides regular duties you can:
      • Take advantage of various career development activities.
      • Participate in various innovation-focused activities (innovation lab, collaboration events, and patent submission writing).
      • Join the Intel Great Place to Work program, which gathers people who love running, cycling, squash, tennis, CrossFit, photography, and many more.
  • Base salary is accompanied by additional benefits such as bonuses, a private medical plan, life insurance, lunch coupons, and more.

Sourced directly from Intel’s career page



Intel

Poland, Gdansk

Job ID
/job/Poland-Gdansk/Research-and-Pathfinding-Internship--AI-Workload-Compiler-Optimization-for-CPU-and-GPU_JR0283072-1
