
Promote Project is hiring: Associate Machine Learning Engineer, Trust & Safety Intelligence
New York, NY, United States
Associate Machine Learning Engineer Trust & Safety Intelligence
Location
New York, NY
Description
The Trust & Safety Intelligence team builds technology that keeps Spotify a safe and welcoming space for creativity. We design and scale AI systems that detect harmful or risky activity, uphold global compliance standards, and make moderation operations more efficient. The team is also expanding how we use GenAI to support policy specialists who work around the clock to protect our users from harm. Together, we’re building a strong foundation for safe and responsible innovation at Spotify.

As an Associate Machine Learning Engineer, you will help develop AI and ML systems that improve how Spotify detects, reviews, and manages risky or non‑compliant activity. You’ll work with engineers, data scientists, and policy partners to design and evaluate models, automate performance tracking, and enhance transparency and auditability. This is a hands‑on, collaborative role where you’ll learn from experienced practitioners while building solutions that advance safety, compliance, and responsibility.