What You'll Do
AI Developer Experience
- Partner directly with AI builder teams (data scientists, ML engineers, researchers) as an embedded technical advisor, understanding their infrastructure bottlenecks and platform needs at the point of creation.
- Translate specific team engagements into generalizable patterns, standards, and architectural guidance that can be adopted across the organization.
- Drive initiatives that reduce friction in the AI development lifecycle—from experimentation through production—by removing operational and technical barriers for builders.
- Design and advocate for developer-friendly abstractions and APIs that hide infrastructure complexity while maintaining flexibility for advanced use cases.
- Collaborate with cross-functional stakeholders to define what "excellent developer experience" means for AI infrastructure, then measure and iterate.
- Contribute to org-wide standards for AI governance, model versioning, experiment tracking, and deployment workflows that balance flexibility with reliability.
On-Prem, Hybrid, and Cloud AI Infrastructure Engineering
- Design and optimize AI infrastructure strategies that span heterogeneous environments (on-premises GPU clusters, hybrid cloud-edge deployments, and cloud-native architectures), ensuring a seamless developer experience across all of them.
- Architect compute orchestration and scheduling solutions (Kubernetes, Ray, or equivalent) that efficiently allocate resources across multiple environments and workload types.
- Own infrastructure for model serving, inference optimization, and real-time inference pipelines supporting low-latency, edge-deployed AI models.
- Define and implement cost optimization strategies across cloud and on-prem resources, including resource allocation, auto-scaling policies, and workload consolidation.
- Build reusable Infrastructure-as-Code frameworks and tooling that other teams can adopt to provision and manage AI workloads consistently across environments.
- Establish observability and monitoring strategies for AI infrastructure, including resource utilization, cost tracking, and performance metrics that enable proactive problem-solving.
- Drive security and compliance standards for AI infrastructure, ensuring data residency, access control, and auditability across deployment environments.
- Mentor engineers on infrastructure best practices, distributed systems concepts, and optimization techniques that improve platform reliability and developer productivity.
Required Skills & Experience
- Deep expertise in designing and operating AI infrastructure across multiple deployment paradigms (on-premises, hybrid, cloud-native).
- Proven ability to work embedded with technical teams, understand complex requirements, and translate them into scalable architectural solutions.
- Strong experience with Kubernetes, container orchestration, and distributed compute frameworks (Ray, Spark, or equivalent) at production scale.
- Expert-level Infrastructure-as-Code proficiency (Terraform, CDK, or equivalent) with demonstrated ability to build reusable, multi-team infrastructure templates.
- Deep knowledge of GPU/accelerator resource management, including scheduling, optimization, and cost tracking across heterogeneous hardware.
- Experience designing model serving infrastructure and inference optimization pipelines for production AI workloads.
- Strong understanding of modern cloud platforms (AWS, Azure, or equivalent) with hands-on experience building multi-cloud or hybrid strategies.
- Demonstrated ability to solve complex infrastructure problems through systematic analysis and creative engineering.
- Strong communication skills and ability to translate technical concepts for both engineering and non-technical audiences.
- Mentoring orientation with demonstrated success upskilling engineers on infrastructure and platform topics.
Preferred Skills – Physical Intelligence and Industrial AI
- Experience building or scaling AI infrastructure for robotics, autonomous systems, or industrial perception applications.
- Familiarity with ROS/ROS2 ecosystems and the infrastructure challenges of deploying ML models in robotic systems.
- Background in edge AI deployment, including optimization for low-latency inference on resource-constrained devices.
- Experience designing ML infrastructure that supports rapid iteration and few-shot adaptation in physical systems.
- Knowledge of heterogeneous compute architectures combining CPUs, GPUs, and specialized processors (NPUs, FPGAs, etc.).
- Experience with real-time operating systems or hard real-time constraints in distributed systems.
- Understanding of manufacturing, autonomous vehicle, or healthcare domains and their infrastructure requirements for AI applications.
For positions requiring access to technical data, Analog Devices, Inc. may have to obtain export licensing approval from the U.S. Department of Commerce – Bureau of Industry and Security and/or the U.S. Department of State – Directorate of Defense Trade Controls. As such, applicants for this position – except US Citizens, US Permanent Residents, and protected individuals as defined by 8 U.S.C. 1324b(a)(3) – may have to go through an export licensing review process.
We foster a culture where everyone has an opportunity to succeed regardless of their race, color, religion, age, ancestry, national origin, social or ethnic origin, sex, sexual orientation, gender, gender identity, gender expression, marital status, pregnancy, parental status, disability, medical condition, genetic information, military or veteran status, union membership, political affiliation, or any other legally protected group.
EEO is the Law: Notice of Applicant Rights Under the Law.
Job Req Type: Experienced
Required Travel: Yes, 10% of the time
Shift Type: 1st Shift/Days
The expected wage range for a new hire into this position is $144,000 to $198,000. Actual wage offered may vary depending on work location, experience, education, training, external market data, internal pay equity, or other bona fide factors. This position qualifies for a discretionary performance-based bonus based on personal and company factors. This position includes medical, vision, and dental coverage, 401k, paid vacation, holidays, sick time, and other benefits.
Sourced directly from Analog Devices' career page.
Job ID: /job/US-MA-Wilmington/Senior-AI-Infrastructure-Engineer-_R262019