AI Safety Researcher

Cohere
Remote · Full Time · $200K – $280K/yr · 🛡️ AI Safety
Tags: AI Safety, Red Teaming, Alignment, Robustness, LLM

Job Description

About the Role

Cohere is building an AI Safety Research team to ensure that our enterprise models remain safe, reliable, and aligned with human values as they are deployed at scale.

Research Areas

  • Robustness and adversarial attacks on enterprise LLMs
  • Hallucination detection and mitigation
  • Bias and fairness in domain-specific fine-tuned models
  • Red-teaming and evaluation methodologies

Why Cohere

We're at the intersection of safety research and real-world enterprise deployment, giving you unique insight into how safety challenges manifest in production.

Requirements

  • PhD in ML, AI Safety, or related field
  • Strong publication record in safety, robustness, or alignment
  • Experience with red-teaming or adversarial ML
  • Excellent communication skills

Benefits

  • Fully remote globally
  • Top-tier compensation + equity
  • Research publication support
  • Conference travel budget

Job Details

Posted: April 4, 2026
Expires: May 4, 2026

About the Company

Cohere

Toronto, Canada

Cohere provides access to advanced large language models and NLP tools through one easy-to-use API.