
AI / Emerging Tech Security Analyst

Alignerr, Denver, CO, United States


AI / Emerging Tech Security Analyst (AI Training)
About the Role

What if your security expertise could directly shape how the world's most advanced AI systems defend themselves against attack, misuse, and manipulation? We're looking for AI Security Analysts to stress-test frontier AI models: identifying vulnerabilities, evaluating threats, and helping ensure these systems remain safe, reliable, and aligned with real-world security standards.

This is a fully remote, flexible contract role built for security professionals who are curious about how modern AI systems can be exploited, abused, or pushed beyond their intended limits. If you've ever wanted to work at the intersection of cybersecurity and cutting-edge AI, this is that opportunity.

Organization: Alignerr

Type: Hourly Contract

Location: Remote

Commitment: 10–40 hours/week

What You’ll Do

Analyze AI and LLM security scenarios to understand how models behave under adversarial, unexpected, or edge-case conditions

Review and evaluate cases involving prompt injection, data leakage, model abuse, and system misuse

Classify security vulnerabilities and recommend appropriate mitigations based on real‑world impact and likelihood

Apply threat modeling frameworks to emerging AI technologies and deployment contexts

Help evaluate and improve AI system behavior so it stays safe, reliable, and aligned with security best practices

Work independently on task-based assignments, fully on your own schedule

Who You Are

Background in cybersecurity, information security, or a closely related technical field

Strong understanding of modern threat modeling and how it applies to AI and LLM systems

Curious and analytical — you enjoy pulling systems apart to understand how they break

Precise and methodical when evaluating complex risk scenarios

Comfortable working independently with a high degree of autonomy

Familiarity with how AI and large language models are built, deployed, and potentially exploited

Nice to Have

Hands‑on experience with penetration testing, red teaming, or adversarial research

Familiarity with prompt injection techniques or AI‑specific attack surfaces

Background in applied security research, security engineering, or vulnerability assessment

Prior exposure to AI safety, responsible AI, or model evaluation workflows

Experience working with APIs or integrating AI tools into production environments

Why Join Us

Work directly on frontier AI systems alongside leading AI research labs

Fully remote and flexible: work when and where it suits you

Freelance autonomy with the structure of meaningful, task‑based work

Make a direct, tangible impact on the safety and security of AI at a critical moment in its development

Potential for ongoing work and contract extension as new projects launch
