What You'll Do
- Cross-Functional Collaboration & Enablement: Partner with project managers, resource managers, IT teams, data scientists, and analysts to gather requirements, define project scope, and deliver reliable, model-ready datasets, removing data-pipeline friction and ensuring strong alignment with business objectives.
- Design, build, and maintain ETL pipelines that ingest and transform data into a cloud-based R&D data lake.
- Develop reusable data models and libraries that streamline common analytics and ETL use cases.
- Implement and uphold data quality through validation, monitoring, and anomaly-detection mechanisms.
- Ensure security and compliance by embedding RBAC, lineage, auditing, and regulatory standards into pipelines.
- Optimize ETL pipelines for performance, scalability, and cost efficiency.
- Automate deployments and operations using CI/CD, Infrastructure as Code, and ETL jobs.
- Monitor and troubleshoot production pipelines, resolving data or platform issues and improving reliability over time.
What You Bring
You can describe yourself as follows:

Education & Experience
- Education: Master’s degree (or equivalent practical experience) in Data Engineering, Software Engineering, Computer Science, or a related technical field.
- Experience: 7+ years of professional data engineering experience working with Big Data in an enterprise IT infrastructure environment.
- AWS Data Lake Experience: Hands-on experience with AWS-native data lake services, including AWS Glue, S3, Athena, and Lake Formation, covering cataloging, governance, ETL orchestration, and secure data access management.
- Databricks Expertise: Extensive hands-on experience designing and implementing ETL pipelines in Databricks, including proven experience migrating existing cloud-based data lakes and ETL workflows to the Databricks platform.
- Delta Lake Expertise: Strong experience working with Delta Lake, including designing Delta tables, optimizing performance, and managing schema evolution and versioned data.
Technical Skills
- Cloud & Automation: Strong experience with cloud-native engineering (Azure/AWS), Infrastructure as Code, CI/CD, and DevOps/MLOps best practices. Hands-on experience with AWS CDK, Ansible, and GitLab CI is a big plus.
- Programming: Advanced proficiency in Python and SQL, with the ability to build robust, maintainable, and reusable code frameworks.
- Data Quality & Governance: Strong command of observability, lineage, data quality frameworks, metadata management, and secure data access patterns.
- Performance Optimization: Expertise in cluster/job tuning, job orchestration, storage optimization, and cost management in large data lake environments.
Professional Attributes
- Customer & Stakeholder Focus: Strong communicator who can translate technical concepts into business value and collaborate effectively across data science, architecture, and the wider R&D organization.
- Strategic Problem-Solving: Comfortable owning technical challenges and designing long-term, scalable solutions.
- Team Mindset: A natural collaborator who contributes to an open, supportive working culture.
- Agile Ways of Working: Familiarity with Agile and Scrum methodologies, including iterative delivery, sprint planning, backlog refinement, and cross-functional collaboration.
Job ID: /job/Bangalore/IT-Data-Engineer---R-D-IT_R-10062426