TikTok is hiring: Model Policy Lead, Video Policy - Trust and Safety in San Jose
TikTok, San Jose, CA, United States, 95199
Responsibilities
TikTok’s Trust & Safety team is seeking a Model Policy Lead for Short Video and Photo to govern how enforcement policies are implemented, maintained, and optimized across both large-scale ML classifiers and LLM-based moderation systems. You will lead a team at the center of AI-driven Trust and Safety enforcement, building Chain-of-Thought (CoT) policy logic, root cause analysis (RCA) and quality pipelines, and labeling strategies that ensure our automated systems are both accurate at scale and aligned with platform standards.

This role combines technical judgment, operational rigor, and policy intuition. You'll work closely with Engineering, Product, and Ops teams to manage how policy is embedded in model behavior, measured through our platform quality metrics, and improved through model iterations and targeted interventions. You’ll also ensure that policy changes, often made to improve human reviewer precision, are consistently propagated across all machine enforcement pathways, maintaining unified and transparent enforcement standards.

You will lead policy governance across four model enforcement streams central to TikTok’s AI moderation systems:
1) At-Scale Moderation Models (ML Classifiers): Own policy alignment and quality monitoring for high-throughput classifiers processing hundreds of millions of videos daily.
2) At-Scale AI Moderation (LLM/CoT-Based): Oversee CoT-based AI moderation systems handling millions of cases per day.
3) Model Change Management: Ensure consistent enforcement across human and machine systems as policies evolve.
4) Next-Bound AI Projects (SOTA Models): Drive development of high-accuracy, LLM-based models used to benchmark and audit at-scale enforcement.

Together, these streams define TikTok’s model-led enforcement infrastructure. Your role is to close the quality gap, ensuring that scale does not come at the cost of precision and that every AI decision reflects a consistent, up-to-date, and defensible application of policy.
This is a high-impact leadership role that requires strong policy intuition, data fluency, and a deep curiosity for how AI technologies shape the future of Trust and Safety.
- Lead a team of Policy Analysts responsible for model governance across ML classifiers and LLM-based AI moderation systems.
- Translate human moderation policies into model-readable logic – including Chain-of-Thought Decision Trees, labeling frameworks, and prompt design standards.
- Own model performance tracking through key enforcement metrics, and drive RCA cycles to identify and close quality gaps.
- Oversee policy alignment for large-scale classifiers and LLM moderation, ensuring enforcement consistency across hundreds of millions of daily content reviews.
- Build and maintain labeling systems for CoT-based AI models, including quality testing, iteration workflows, and resource planning.
- Lead cross-system change management, ensuring that policy iterations are reflected consistently across human reviewers, classifiers, and AI models.
- Guide the development of next-bound SOTA models, defining policy goals, labeling requirements, and use-case applications.
- Partner with Engineering, Product, Ops, and Policy to align on enforcement strategy, rollout coordination, and long-term model enforcement and detection priorities.
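As context for the "model-readable logic" work above, here is a minimal, hypothetical sketch of how a human policy line might be expressed as a machine-traversable decision tree. The schema and policy questions are illustrative assumptions, not TikTok's actual tooling or policy:

```python
# Hypothetical policy decision tree (illustrative schema, not TikTok's).
# Internal nodes pose a yes/no question; leaves carry a label and action.
POLICY_TREE = {
    "question": "Does the video depict a regulated good?",
    "yes": {
        "question": "Is it presented in a trade or solicitation context?",
        "yes": {"label": "violation", "action": "remove"},
        "no": {"label": "contextual", "action": "age_restrict"},
    },
    "no": {"label": "no_violation", "action": "approve"},
}

def evaluate(tree, answers):
    """Walk the tree with a sequence of yes/no answers until a leaf is reached."""
    node = tree
    for ans in answers:
        if "label" in node:  # already at a leaf; stop consuming answers
            break
        node = node["yes"] if ans else node["no"]
    return node

# A regulated good shown outside a trade context ends at the
# "contextual" leaf with an age_restrict action.
leaf = evaluate(POLICY_TREE, [True, False])
print(leaf["action"])  # age_restrict
```

Encoding policy as explicit structure like this (rather than free prose) is what makes enforcement logic auditable and keeps human and machine pathways in sync when a policy line changes.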
Qualifications
- Experience in Trust & Safety, ML governance, moderation systems, or related policy roles.
- Experience in managing or mentoring small to medium-sized teams.
- Proven ability to lead complex programs with cross-functional stakeholders.
- Strong understanding of AI/LLM systems, including labeling pipelines and CoT-based decision logic.
- Comfort working with quality metrics and enforcement diagnostics – including FP/FN tracking, RCAs, and precision-recall tradeoffs.
- Confident self-starter with excellent judgment, able to balance multiple trade-offs to develop principled, enforceable, and defensible policies and strategies.
- Bachelor’s or Master’s degree in artificial intelligence, public policy, politics, law, economics, behavioral sciences, or related fields.
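For candidates less familiar with the quality metrics named above, here is a minimal illustrative sketch of how precision and recall fall out of FP/FN tracking. The counts are invented for the example; the function names are not part of any TikTok system:

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from raw enforcement counts.

    tp: violations the system correctly actioned (true positives)
    fp: benign content actioned in error (false positives)
    fn: violations the system missed (false negatives)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: a classifier removes 900 true violations and 100 benign
# videos, while 300 violations slip through.
p, r = precision_recall(tp=900, fp=100, fn=300)
print(p)  # 0.9  -> 10% of removals were over-enforcement
print(r)  # 0.75 -> 25% of violations were missed
```

The precision-recall tradeoff referenced in the role is exactly this tension: tightening a model to cut false positives typically lets more violations through, and RCA cycles are how the gap on each side is diagnosed and closed.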
Preferred Qualifications
- Experience working in a start-up, or being part of new teams in established companies.
- Experience in prompt engineering.
About TikTok
TikTok is the leading destination for short-form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok's global headquarters are in Los Angeles and Singapore, and we also have offices in New York City, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.
Why Join Us
Inspiring creativity is at the core of TikTok's mission. Our innovative product is built to help people authentically express themselves, discover, and connect – and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity, and bring joy – a mission we work towards every day. We strive to do great things with great people. We lead with curiosity, humility, and a desire to make an impact in a rapidly growing tech company. Every challenge is an opportunity to learn and innovate as one team. We’re resilient and embrace challenges as they come. By constantly iterating and fostering an “Always Day 1” mindset, we achieve meaningful breakthroughs for ourselves, our company, and our users. When we create and grow together, the possibilities are limitless.
Diversity & Inclusion
TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
Trust & Safety
TikTok recognizes that keeping our platform safe for the TikTok communities is no ordinary job; it can be rewarding, but also psychologically demanding and emotionally taxing for some. That is why we share the potential hazards, risks, and implications of this unique line of work from the start, so our candidates are well informed before joining. We are committed to the wellbeing of all our employees and promise to provide comprehensive, evidence-based programs to promote and support physical and mental wellbeing throughout each employee's journey with us. We believe that wellbeing is a shared responsibility in which everyone has a part to play, so we work in collaboration and consultation with our employees and across our functions to ensure a truly person-centered, innovative, and integrated approach.
Accommodation
TikTok is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://tinyurl.com/RA-request.
Job Information
Compensation Description (Annually): The base salary range for this position in the selected city is $147,000 - $270,000 annually. Compensation may vary outside of this range depending on a number of factors, including a candidate’s qualifications, skills, competencies and experience, and location. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives, and restricted stock units. Benefits may vary depending on the nature of employment and the country work location.