AI Safety Scientist, Deep Learning


What you'll be doing

  • Develop datasets and moderator models for evaluating LLMs and end-to-end systems for Content Safety and ML Fairness. These models can be text-to-text or multimodal-to-text.
  • Develop datasets for training LLMs with SFT and RL techniques for Content Safety, ML Fairness, Security, and more.
  • Research and implement cutting-edge techniques for bias detection and mitigation in LLMs and systems.
  • Define and track key metrics for responsible LLM behavior and usage.
  • Follow best practices for automation, monitoring, scale, and safety.
  • Contribute to our repositories and develop safety tools to help ML teams be more effective.
  • Collaborate with other engineers, data scientists, and researchers to develop and implement solutions to content safety and ML fairness challenges.
What we need to see

  • A Master’s or PhD in Computer Science, Electrical Engineering, or a related field, or equivalent experience.
  • 5+ years of work experience developing and deploying machine learning models in production.
  • Strong understanding of machine learning principles and algorithms.
  • Hands-on programming experience in Python and in-depth knowledge of machine learning frameworks such as Keras or PyTorch.
  • 2+ years of work experience in one or more of the following areas: Content Safety, ML Fairness, AI Model Security, or related areas.
  • Experience with one or more of the following areas within Content Safety: Hate/Harassment, Sexualized, Harmful/Violent, or other specific areas from your application.
  • Experience working with large multi-lingual datasets and multi-lingual models.
  • Strong problem-solving and analytical skills.
  • Excellent collaboration and communication skills.
  • Demonstrates behaviors that build trust: humility, transparency, respect, intellectual honesty.
Ways to stand out from the crowd

  • Experience with alignment/fine-tuning of LLMs, including standard LLMs as well as VLMs (Vision Language Models) or any-to-text models.
  • Prior experience with multimodal and/or multilingual Content Safety, legal and regulatory compliance.
  • Experience with Hallucinations/Generative Misinformation.
  • Experience with GenAI Security including Prompt Stability, Model Extraction, Confidentiality/Data Extraction, Integrity, Availability and Adversarial Robustness.
  • Passion for AI and a demonstrated commitment to advancing the field through innovative research; prior scientific research and publication experience.
  • With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered to be one of the technology industry's most desirable employers.
  • We have some of the most forward-thinking and hardworking people in the world working with us, and our engineering teams are growing fast in some of the hottest state-of-the-art fields: Deep Learning, Artificial Intelligence, and Large Language Models.
  • If you're a creative engineer with a real passion for robust and enjoyable user experiences, we want to hear from you.
  • As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Sourced directly from NVIDIA’s career page

Job ID: JR2012815
/job/Vietnam-Ho-Chi-Minh-City/AI-Safety-Scientist--Deep-Learning_JR2012815
