
About the Role
Anthropic is an AI safety and research company focused on building reliable, interpretable, and steerable AI systems. As a Safeguards Analyst working on Account Abuse, you will help build and scale the detection, enforcement, and operational capabilities that protect the platform against scaled abuse.
Important Context:
In this position, you may be exposed to explicit content spanning a range of topics, including sexual, violent, or psychologically disturbing material. There is also an on-call responsibility shared across the Policy and Enforcement teams.
Responsibilities
- Develop and iterate on account signals and prevention frameworks that consolidate internal and external data into actionable abuse indicators
- Develop and optimize identity and account-linking signals using graph-based data infrastructure to detect coordinated and scaled account abuse
- Evaluate, integrate, and operationalize third-party vendor signals, assessing whether new data sources provide genuine lift in detection
- Expand internal account signals with new data sources and behavioral indicators to improve detection coverage
- Build and maintain processes that evaluate new product launches for scaled abuse risks, working closely with product teams to ensure enforcement readiness
- Operationalize and iterate on enforcement tooling, including appeals workflows, review processes, and user communications, to maintain quality and scale with growing volume
- Analyze enforcement performance through operational metrics, partnering with the team to keep detection accurate as abuse patterns evolve
- Manage payment fraud and dispute operations to protect revenue and maintain our standing with payment partners
- Coordinate enforcement efforts for policy compliance gaps across products, working with relevant teams to build scalable review processes
- Collaborate with cross-functional teams (Engineering, Product, Legal, Data Science) to surface new signals and translate detection capabilities into enforcement workflows
- Maintain detailed documentation of signal development, enforcement processes, and operational decisions
Qualifications
- 2+ years of experience in risk scoring, fraud detection, trust and safety, or policy enforcement
- Hands-on experience building detection systems, risk models, or enforcement processes and workflows
- Experience evaluating and integrating third-party data sources into detection or scoring pipelines
- Strong SQL and Python skills; this role involves heavy data analysis across complex, multi-table data relationships
- Familiarity with identity signals such as device fingerprinting, account linking, or entity resolution, or experience with appeals processes and customer-facing enforcement communications
- Demonstrated ability to analyze complex data problems and translate findings into actionable improvements
- Strong written and verbal communication skills, with the ability to explain technical tradeoffs and navigate cross-functional stakeholder conversations
- Bachelor's degree in Computer Science, Data Science, or a related field, or equivalent practical experience
You might be a good fit if you
- Have built risk scores, detection systems, signal pipelines, or enforcement processes in a previous role (identity verification, trust and safety, or similar)
- Are comfortable working with ambiguous, noisy data and extracting meaningful signal
- Think critically about signal quality and enforcement performance, evaluating whether new detection signals or processes meaningfully improve outcomes
- Have experience with graph-based data, account-linking problems, or cross-functional process design
- Are proactive about identifying gaps in existing detection or enforcement and proposing new approaches
- Have experience leveraging generative AI tools to support analytical, detection, or enforcement workflows
- Can balance deep analytical work with cross-functional collaboration and stakeholder coordination
- Have a background or interest in cybersecurity or threat intelligence (a plus, not a requirement)
The annual compensation range for this role is listed below.
$230,000 - $310,000 USD
Logistics
Education requirements:
We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. Some roles may require more time in our offices.
Visa sponsorship:
We do sponsor visas. If an offer is made, we will make reasonable efforts to obtain a visa for you, with assistance from an immigration lawyer.
We encourage you to apply even if you do not meet every qualification; not all strong candidates will. We value diverse perspectives and encourage applications from underrepresented groups.
Your safety matters to us. To protect yourself from potential scams, be wary of emails from non-Anthropic domains: legitimate recruiters will not ask for money or banking information before your first day. Visit anthropic.com/careers for confirmed openings.
How we're different
We believe the highest-impact AI research is big science, conducted by a collaborative team pursuing steerable, trustworthy AI. We value communication and cross-functional collaboration and share ongoing research directions through our discussions and published work. Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and office space for collaboration with colleagues. For more information about AI usage in our application process, see our policy.