What You'll Do
AI Developer Experience
- Help define and evolve the organizational vision for AI developer experience, identifying gaps and opportunities across the full AI development lifecycle and across all deployment environments.
- Lead cross-team initiatives that systematically improve how AI builders experience infrastructure—from experimentation through production—creating measurable improvements in developer productivity and satisfaction.
- Work strategically with data science teams and research groups to understand emerging AI use cases and infrastructure needs at the frontier of the organization's work.
- Establish org-wide patterns, standards, and best practices for AI infrastructure that are adopted across multiple business units and geographies.
- Design and advocate for developer-first abstractions and platforms that absorb infrastructure complexity while enabling advanced customization for specialized use cases.
- Help evolve governance frameworks—including model versioning, experiment tracking, deployment workflows, and compliance standards—that scale with organizational growth without stifling innovation.
- Shape the organizational approach to infrastructure cost, performance, and reliability trade-offs, ensuring alignment with business objectives across teams.
- Mentor and develop engineers across the organization, creating a culture of architectural excellence and infrastructure craftsmanship.
On-Prem, Hybrid, and Cloud AI Infrastructure Engineering
- Architect the organization's long-term AI infrastructure strategy spanning on-premises, hybrid, and cloud-native environments, ensuring a cohesive developer experience across all.
- Define reference architectures and technical standards for compute orchestration, model serving, inference optimization, and resource management that guide platform development across teams.
- Lead the design of scalable, multi-region AI infrastructure that supports ADI's geographic expansion and business unit diversity, accounting for regulatory, latency, and cost requirements.
- Own infrastructure innovation initiatives that improve efficiency, reduce cost, and unlock new capabilities—including GPU utilization optimization, heterogeneous compute strategies (CPUs, GPUs, NPUs, FPGAs), and emerging accelerator technologies.
- Establish enterprise-level observability, governance, and security frameworks for AI infrastructure that maintain compliance across diverse environments while enabling rapid iteration.
- Drive architectural decisions on critical infrastructure dependencies (Kubernetes strategies, container runtimes, distributed compute frameworks, model serving platforms, etc.), influencing multi-year technical roadmaps.
- Partner with infrastructure and cloud teams to evolve shared platforms and services that serve AI workloads, ensuring architectural alignment across the organization.
- Anticipate infrastructure, architectural, and organizational risks—from evolving workload patterns to regulatory changes to emerging security threats—and implement durable solutions adopted org-wide.
- Lead by example in creating reusable Infrastructure-as-Code frameworks, architectural patterns, and tooling that amplify team productivity and reduce toil across the organization.
Required Skills & Experience
- Recognized expert in AI infrastructure with deep knowledge of on-premises, hybrid, and cloud-native architectures, and demonstrated influence and impact across organizations or business units.
- Proven track record of architecting infrastructure systems that serve multiple, sometimes competing, organizational needs while maintaining coherence and simplicity.
- Expert communication and strategic thinking skills—ability to translate technical architecture into research and product impact.
- Expert-level proficiency with Kubernetes, distributed compute frameworks (Ray, Spark, or equivalent), and the ability to define org-wide orchestration and scheduling strategies.
- Mastery of Infrastructure-as-Code and GitOps frameworks, with demonstrated ability to design reusable, multi-team infrastructure patterns and platforms.
- Deep expertise in GPU and accelerator resource management, cost optimization, and performance tuning across diverse workload types and hardware configurations.
- Expert knowledge of cloud platforms (AWS, Azure, or equivalent) and proven ability to architect multi-cloud or hybrid strategies that balance flexibility, cost, and operational complexity.
- Strong background in distributed systems design, including handling scale, reliability, consistency, and failure modes across heterogeneous infrastructure.
- Demonstrated ability to lead large, cross-team initiatives from conception through execution, influencing complex decision-making and shaping long-range technical directions.
- Strong mentoring orientation with demonstrated success developing leaders and upskilling teams across the organization in infrastructure and platform topics.
- Recognized ability to drive innovation, anticipate organizational needs, and architect durable solutions that scale with the business.
Preferred Skills – Physical Intelligence and Industrial AI
- Deep expertise in building or scaling AI infrastructure for robotics, autonomous systems, or industrial perception at enterprise scale, with demonstrated patterns and reusable frameworks.
- Expert-level knowledge of ROS/ROS2 ecosystems and the infrastructure challenges of deploying and managing ML models across diverse robotic platforms and environments.
- Strategic experience with edge AI deployment and the architectural tradeoffs between centralized cloud inference, edge inference, and hybrid models in physical systems.
- Background designing ML infrastructure that supports rapid adaptation, few-shot learning, and task transfer in physical systems, enabling scalable deployment across heterogeneous environments.
- Deep understanding of heterogeneous compute architectures (CPUs, GPUs, TPUs, NPUs, FPGAs) and experience optimizing inference pipelines for specialized hardware.
- Experience with real-time operating systems and the infrastructure requirements of hard real-time, safety-critical AI systems.
- Strategic familiarity with manufacturing, autonomous vehicles, or healthcare domains and the business and technical requirements that shape AI infrastructure in those industries.
- Demonstrated ability to influence robotics, manufacturing, or autonomous systems teams and shape architectural decisions that bridge domain expertise with modern AI/ML capabilities.
- Track record of translating specific domain engagements into generalizable, org-wide AI infrastructure capabilities and standards.
For positions requiring access to technical data, Analog Devices, Inc. may have to obtain export licensing approval from the U.S. Department of Commerce - Bureau of Industry and Security and/or the U.S. Department of State - Directorate of Defense Trade Controls.
Sourced directly from Analog Devices’s career page
Job ID: R262018 (/job/US-MA-Wilmington/Principal-AI-Infrastructure-Engineer-_R262018)