Inference Optimization Architect, Speech AI


Overview

  • Widely considered to be one of the technology world’s most desirable employers, NVIDIA is an industry leader with groundbreaking developments in High-Performance Computing, Artificial Intelligence and Visualization.
  • The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services.
  • GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, autonomous cars and conversational AI that can perceive and understand the world.
  • Today, we are increasingly known as “the AI computing company.” We're looking to grow our company and build our teams with the smartest people in the world.
  • Join us at the forefront of technological advancement.
  • NVIDIA is looking for an Inference Optimization Architect to accelerate and scale our Speech AI models and improve the experience of millions of customers.
  • You will focus on reducing inference latency, improving throughput, and optimizing resource utilization across our AI infrastructure.
  • If you're creative and passionate about solving real-world conversational AI problems, come join our Speech AI Engineering team.
  • What you’ll be doing:
  • Optimize Inference Performance: Improve streaming latency and throughput through advanced batching strategies, encoder caching, and multi-threaded pipeline optimizations.
  • Model Compression: Implement techniques including quantization, pruning, and knowledge distillation.
  • Benchmarking: Profile and benchmark models to identify and resolve performance bottlenecks.
  • GPU Profiling: Profile and debug GPU workloads using Nsight Systems and Nsight Compute.
  • Hardware Acceleration: Develop custom kernels and leverage hardware acceleration (CUDA, TensorRT, etc.).
  • Infrastructure Design: Design and implement efficient serving infrastructure for Speech models at scale.
  • Collaboration: Work alongside Model researchers to transition models from research to production readiness.
  • Cross-Platform Optimization: Optimize inference across diverse GPU platforms (data centre, edge devices).
  • Tooling: Build frameworks for automated model optimization pipelines.
  • Resource Management: Monitor and improve inference costs and resource utilization in production.
  • As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Sourced directly from NVIDIA’s career page




Job ID: /job/India-Bengaluru/Inference-Optimization-Architect--Speech-AI_JR2013716

