
Adversarial Machine Learning Engineer
C-Serv Global Ltd, Portland, OR, United States
The Opportunity
We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products.
We are looking for an adversarial machine learning specialist who thinks like an attacker.
This role focuses on identifying vulnerabilities in LLM-driven systems, breaking model guardrails, exploiting data pathways, and stress-testing AI deployments before they reach enterprise customers.
This is a hands‑on technical role at the core of AI security.
What You’ll Do
Conduct adversarial testing across LLM and AI-based systems
Execute real‑world attack simulations, including:
Prompt injection
Jailbreaking and guardrail bypass
Data exfiltration attempts
Model inversion and evasion techniques
RAG manipulation
Develop scripts and tooling to automate attack scenarios
Analyse model behaviour under adversarial pressure
Identify systemic vulnerabilities in:
APIs
Embedding pipelines
Vector databases
Fine‑tuned model implementations
Collaborate with engineering teams to validate remediation
Document findings clearly and concisely
You will help ensure AI systems are resilient before they are deployed at scale.
What We’re Looking For
Core Technical Skills
Strong experience in adversarial ML or AI security research
Experience working with LLM-based systems (OpenAI, Anthropic, open‑source models, etc.)
Deep understanding of:
Prompt injection techniques
Model jailbreak methodologies
AI system exploitation vectors
Strong Python skills
Experience building custom attack tooling or experimentation frameworks
AI Systems Knowledge
Familiarity with:
RAG architectures
Vector databases
Model fine‑tuning workflows
API‑based model deployments
Understanding of model safety mechanisms and guardrails
Nice to Have
Background in cybersecurity or penetration testing
Familiarity with OWASP LLM Top 10
Experience working in enterprise environments
Who You Are
Curious and relentless
Comfortable thinking like an attacker
Creative in finding non‑obvious vulnerabilities
Detail‑oriented but fast‑moving
Comfortable operating in ambiguity
Independent but collaborative
You don’t just run test cases — you design new ones.
Comprehensive Private Medical Coverage
Support for Mental Health Expenses
Life Insurance Options
Attractive Compensation Package
#J-18808-Ljbffr
We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products.
We are looking for an adversarial machine learning specialist who thinks like an attacker.
This role focuses on identifying vulnerabilities in LLM-driven systems, breaking model guardrails, exploiting data pathways, and stress-testing AI deployments before they reach enterprise customers.
This is a hands‑on technical role at the core of AI security.
What You’ll Do
Conduct adversarial testing across LLM and AI-based systems
Execute real‑world attack simulations, including:
Prompt injection
Jailbreaking and guardrail bypass
Data exfiltration attempts
Model inversion and evasion techniques
RAG manipulation
Develop scripts and tooling to automate attack scenarios
Analyse model behaviour under adversarial pressure
Identify systemic vulnerabilities in:
APIs
Embedding pipelines
Vector databases
Fine‑tuned model implementations
Collaborate with engineering teams to validate remediation
Document findings clearly and concisely
You will help ensure AI systems are resilient before they are deployed at scale.
What We’re Looking For
Core Technical Skills
Strong experience in adversarial ML or AI security research
Experience working with LLM-based systems (OpenAI, Anthropic, open‑source models, etc.)
Deep understanding of:
Prompt injection techniques
Model jailbreak methodologies
AI system exploitation vectors
Strong Python skills
Experience building custom attack tooling or experimentation frameworks
AI Systems Knowledge
Familiarity with:
RAG architectures
Vector databases
Model fine‑tuning workflows
API‑based model deployments
Understanding of model safety mechanisms and guardrails
Nice to Have
Background in cybersecurity or penetration testing
Familiarity with OWASP LLM Top 10
Experience working in enterprise environments
Who You Are
Curious and relentless
Comfortable thinking like an attacker
Creative in finding non‑obvious vulnerabilities
Detail‑oriented but fast‑moving
Comfortable operating in ambiguity
Independent but collaborative
You don’t just run test cases — you design new ones.
Comprehensive Private Medical Coverage
Support for Mental Health Expenses
Life Insurance Options
Attractive Compensation Package
#J-18808-Ljbffr