Principal Engineer – Distributed AI Systems Architecture (Heterogeneous Compute)

About This Role

  • We are seeking a Principal Engineer to define and architect the next generation of distributed AI systems across heterogeneous compute platforms, including CPUs, GPUs, IPUs/FNICs, and emerging dataflow accelerators.
  • This role focuses on one of the hardest problems in modern computing: how to dynamically execute and optimize large-scale AI computation graphs across diverse hardware while managing state, locality, and performance at system scale.
  • You will operate at the intersection of systems architecture, high-performance computing, and AI infrastructure, defining the execution model, runtime abstractions, and placement strategies that turn a rack of heterogeneous devices into a coherent, programmable system.
Key Responsibilities
1. Dynamic Execution of Distributed Computation Graphs
  • Define a runtime model for executing AI workloads as distributed computation graphs across heterogeneous resources
  • Design abstractions for graph representation, dependencies, and execution semantics
  • Enable dynamic scheduling and execution across CPUs, GPUs, IPUs/FNICs, and specialized accelerators
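To make the first responsibility concrete, here is a minimal list-scheduling sketch over a device-tagged computation graph: nodes carry a device class and dependency edges, and the scheduler dispatches whatever is ready. All class and device names are illustrative assumptions, not Intel APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One stage of a computation graph, tagged with a target device class."""
    name: str
    device: str                      # e.g. "cpu", "gpu", "ipu" (illustrative labels)
    deps: list = field(default_factory=list)

def dynamic_schedule(nodes):
    """Dispatch nodes whose dependencies have completed (list-scheduling sketch)."""
    done, order = set(), []
    pending = list(nodes)
    while pending:
        ready = [n for n in pending if all(d in done for d in n.deps)]
        if not ready:
            raise ValueError("cycle in computation graph")
        for n in ready:
            order.append((n.name, n.device))   # a real runtime would launch a kernel here
            done.add(n.name)
            pending.remove(n)
    return order

graph = [
    Node("embed", "cpu"),
    Node("attn", "gpu", deps=["embed"]),
    Node("moe", "ipu", deps=["embed"]),
    Node("out", "gpu", deps=["attn", "moe"]),
]
print(dynamic_schedule(graph))
```

A production runtime would replace the ready-list scan with event-driven completion callbacks, but the dependency-ordered, device-aware dispatch is the core idea.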
2. Stateful Scheduling and Memory-Centric Architecture
  • Architect systems where state (e.g., KV cache) is a first-class concern in scheduling and execution
  • Define models for data locality, memory hierarchy, and state ownership for distributed inferencing solutions
  • Optimize for minimal data movement and efficient access to distributed state
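A hedged sketch of what treating state as a first-class scheduling concern can mean in practice: route each inference request to the device that already owns its KV cache, falling back to the least-loaded device only for cold requests. The policy and names are illustrative assumptions, not a real runtime's API.

```python
def place_request(request_id, kv_owner, load):
    """kv_owner: request_id -> device holding its KV cache; load: device -> queue depth."""
    if request_id in kv_owner:
        return kv_owner[request_id]          # zero data movement: reuse resident state
    device = min(load, key=load.get)         # cold request: pick least-loaded device
    kv_owner[request_id] = device            # that device now owns the new KV cache
    return device

kv_owner = {"req-1": "gpu0"}
load = {"gpu0": 7, "gpu1": 2}
print(place_request("req-1", kv_owner, load))  # resident state wins over load balance
print(place_request("req-2", kv_owner, load))  # new request lands on the idler device
```

Note the deliberate trade-off: the busier device still serves req-1 because moving its KV cache would cost more than waiting in a deeper queue.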
3. Graph Introspection and Automated Partitioning
  • Develop mechanisms to analyze AI computation graphs and classify stages by:
    o compute intensity
    o memory bandwidth requirements
    o communication cost
    o latency sensitivity
  • Drive automated or semi-automated partitioning of workloads across heterogeneous compute
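One common way to classify stages by compute intensity is a roofline-style test on arithmetic intensity (FLOPs per byte moved). The sketch below labels each stage and maps it to a device class; the ridge-point threshold and device names are assumptions chosen for illustration.

```python
def classify_stage(flops, bytes_moved, ridge_point=10.0):
    """Label a stage by arithmetic intensity (FLOPs/byte), roofline style."""
    intensity = flops / bytes_moved
    return "compute-bound" if intensity >= ridge_point else "memory-bound"

def partition(stages):
    """Map compute-bound stages to an accelerator, memory-bound ones near fast memory."""
    target = {"compute-bound": "dataflow-accelerator", "memory-bound": "gpu-hbm"}
    return {name: target[classify_stage(f, b)] for name, (f, b) in stages.items()}

stages = {
    "gemm":    (2e12, 8e9),   # 250 FLOPs/byte  -> compute-bound
    "softmax": (1e9, 4e9),    # 0.25 FLOPs/byte -> memory-bound
}
print(partition(stages))
```

A fuller partitioner would also weigh communication cost and latency sensitivity, as the bullet list above notes, but intensity alone already separates GEMM-heavy stages from bandwidth-bound ones.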
4. Integration of Specialized Accelerators
  • Architect frameworks that treat specialized accelerators (e.g., dataflow engines) as first-class execution targets
  • Define execution boundaries, data exchange models, and integration strategies across device classes
  • Enable interoperability across diverse compute paradigms without sacrificing performance
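A minimal sketch of "first-class execution targets": each device class implements one interface declaring how tensors cross its execution boundary and how a stage runs, so the runtime stays device-agnostic. These class names are hypothetical; a real framework would wrap vendor SDKs behind such a boundary.

```python
from abc import ABC, abstractmethod

class ExecutionTarget(ABC):
    @abstractmethod
    def ingest(self, tensor): ...      # cross the execution boundary into the device
    @abstractmethod
    def run(self, op, tensor): ...     # execute one graph stage on the device

class CpuTarget(ExecutionTarget):
    def ingest(self, tensor):
        return list(tensor)            # host memory: no transfer needed
    def run(self, op, tensor):
        return [op(x) for x in tensor]

class DataflowTarget(ExecutionTarget):
    def ingest(self, tensor):
        return tuple(tensor)           # stand-in for a DMA transfer into the engine
    def run(self, op, tensor):
        return tuple(op(x) for x in tensor)

def execute(target, op, tensor):
    """The runtime sees one interface regardless of device class."""
    return target.run(op, target.ingest(tensor))

print(execute(CpuTarget(), lambda x: x * 2, [1, 2, 3]))
print(execute(DataflowTarget(), lambda x: x * 2, [1, 2, 3]))
```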
5. MoE-Aware Execution and Adaptive Placement
  • Design runtime strategies for Mixture-of-Experts (MoE) models, including:
    o expert placement
    o routing locality
    o load balancing vs. data movement trade-offs
  • Enhance existing frameworks for MoE; optimize the communication path with IPUs/FNICs and the compute path with Intel accelerators
  • Enable adaptive execution based on real-time system signals (latency, utilization, skew)
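The routing-locality trade-off above can be sketched as a gate that prefers the top-scoring expert but falls back to a locally resident expert when its score is within a tolerance, trading a little routing quality for less cross-device traffic. The tolerance value and expert names are illustrative assumptions.

```python
def route_token(scores, local_experts, tolerance=0.05):
    """scores: expert -> gate score; local_experts: experts resident on this device."""
    best = max(scores, key=scores.get)
    local = [e for e in local_experts if e in scores]
    if local:
        best_local = max(local, key=scores.get)
        if scores[best] - scores[best_local] <= tolerance:
            return best_local            # locality wins: avoid remote dispatch
    return best                          # score gap too large: pay the movement cost

scores = {"e0": 0.25, "e1": 0.33, "e2": 0.36}
print(route_token(scores, {"e1"}))       # small gap to e2, so stay local on e1
print(route_token(scores, {"e0"}))       # large gap, so dispatch remotely to e2
```

A production router would fold in the real-time signals the bullet mentions (latency, utilization, skew) rather than a fixed tolerance.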
6. Adaptive Runtime and Feedback-Driven Optimization
  • Define observability and telemetry models for distributed AI execution
  • Build feedback loops that continuously optimize placement, scheduling, and resource utilization
  • Drive system-level performance across latency, throughput, and efficiency metrics

Qualifications

Minimum Qualifications:
  • Bachelor's degree in Computer Science, Software Engineering, or a related specialized field, or equivalent experience per business needs
  • 12+ years of experience with a Bachelor's degree
  • Proven expertise in defining and implementing software architectures for AI frameworks, protocols, and algorithms
  • Deep experience in systems architecture, high-performance computing, or distributed systems
  • Strong background in parallel or data-parallel computation models
  • Experience with heterogeneous compute environments (CPU, GPU, DSP, or accelerators)
  • Proven ability to design end-to-end systems from abstraction through implementation
  • Strong understanding of performance trade-offs across compute, memory, and interconnect

Preferred Qualifications:
  • 8+ years of experience with a Master's degree, or 6+ years of experience with a PhD
  • Experience with AI/ML systems, inference infrastructure, or large-scale model serving
  • Familiarity with stream processing, dataflow models, or graph execution systems
  • Knowledge of modern AI frameworks or runtimes
  • Experience building developer-facing SDKs or programming models
  • Background in performance optimization and benchmarking

The qualifications listed may be obtained through a combination of relevant industry experience, internships, and schoolwork/classes/research.
In this role, you will:
  • Operate as a technical leader and architect, not just an implementer
  • Drive cross-team alignment across hardware, software, and infrastructure
  • Influence long-term system design and platform direction
  • Mentor engineers and shape architectural thinking across the organization

Job Type: Experienced Hire
Shift: Shift 1 (United States of America)
Primary Location: US, California, Santa Clara
Additional Locations: US, Oregon, Hillsboro; US, Texas, Austin

Business Group

At the Data Center Group (DCG), we're committed to delivering exceptional products and delighting our customers.
  • We offer both broad-market Xeon-based solutions and custom x86-based products, ensuring tailored innovation for diverse needs across general-purpose compute, web services, HPC, and AI-accelerated systems.
  • Our charter encompasses defining business strategy and roadmaps, product management, developing ecosystems and business opportunities, delivering strong financial performance, and reinvigorating x86 leadership.
  • Join us as we transform the data center segment through workload-driven leadership products and close collaboration with our partners.
Position of Trust

This role is a Position of Trust. Should you accept this position, you must consent to and pass an extended Background Investigation, which includes (subject to country law) extended education, SEC sanctions, and additional criminal and civil checks. For internals, this investigation may or may not be completed prior to starting the position. For additional questions, please contact your Recruiter.
Benefits

We offer a total compensation package that ranks among the best in the industry. It consists of competitive pay, stock bonuses, and benefit programs which include health, retirement, and vacation. Find out more about the benefits of working at Intel.

Annual Salary Range for jobs which could be performed in the US: $255,850.00-$361,200.00 USD

The range displayed on this job posting reflects the minimum and maximum target compensation for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific compensation range for your preferred location during the hiring process.

Sourced directly from Intel’s career page


Job ID: JR0283339 (/job/US-California-Santa-Clara/Principal-Engineer---Distributed-AI-Systems-Architecture--Heterogeneous-Compute-_JR0283339)
