Mediabistro logo
job logo

AI / Emerging Tech Security Analyst

Alignerr, Boston, MA, United States


AI / Emerging Tech Security Analyst (AI Training)
About The Role
What if your security expertise could directly shape how the world's most advanced AI systems defend against attacks, misuse, and adversarial threats? We're looking for AI Security Analysts to probe frontier AI models, identify vulnerabilities, and help ensure these systems remain safe and reliable in the real world. This is a fully remote, flexible contract role for security professionals with a sharp analytical mind and a genuine curiosity about how AI systems can be exploited—and protected. If you think like an attacker and care about getting it right, this role was built for you.

Job Details

Organization: Alignerr

Type: Hourly Contract

Location: Remote

Commitment: 10–40 hours/week

What You'll Do

Analyze AI and LLM security scenarios to understand how models behave under adversarial or unexpected conditions

Investigate prompt injection attacks, data leakage vectors, model abuse patterns, and system misuse cases

Classify security vulnerabilities and recommend appropriate mitigations based on real-world impact and likelihood

Evaluate AI system behavior against security best practices and help identify gaps before they become problems

Work with realistic threat scenarios drawn from the frontier of modern AI deployment

Complete task-based assignments independently on your own schedule

Who You Are

Solid background in cybersecurity, application security, or related disciplines

Familiar with modern threat modeling concepts and how they apply to AI and machine learning systems

Naturally curious and analytical—you enjoy pulling systems apart to understand how they break

Precise and methodical when evaluating complex systems and potential attack surfaces

Strong written communicator who can clearly document findings and reasoning

Comfortable working independently without hand-holding

Nice to Have

Hands‑on experience with LLMs, AI APIs, or machine learning pipelines

Familiarity with adversarial ML techniques—prompt injection, jailbreaking, model inversion, or similar

Background in red teaming, penetration testing, or vulnerability research

Experience with AI safety, alignment, or responsible AI development

Knowledge of emerging AI security frameworks or standards

Why Join Us

Work directly on frontier AI systems alongside leading AI research labs

Fully remote and flexible—work when and where it suits you

Freelance autonomy with the structure of meaningful, task‑based work

Contribute to a domain that sits at the cutting edge of both security and AI

Potential for ongoing work and contract extension as new projects launch

#J-18808-Ljbffr