Mediabistro logo
job logo

Staff AI Agentic Security Engineer

Bridgewater Associates, LP, New York, NY, United States


About the Security Group
The Security Department’s mission is to protect Bridgewater. We constantly evolve our cyber, physical, and staff security practices to meet business needs and stay ahead of the changing threat landscape.

About Your Role
This person needs to know how to build and how to protect. We’re not looking for someone who reviews architectures from the sideline. We need someone in the arena — writing agents, shipping code, deploying guardrails, and setting the standard for how an entire firm adopts AI securely.

This is a 50/50 role with two equally critical mandates:

PILLAR 1 (50%) — AI Thought Leader in Security: Build It

Build Security Operations Agents:

Design, develop, and deploy autonomous agents for threat detection, alert triage, vulnerability management, and incident response — to transform the way those teams operate.

Modernize Workflows AI‑Natively:

Reimagine existing security processes through the lens of agentic AI. Replace manual runbooks with intelligent agents that reason, act, and elevate. Build agent‑powered security copilots for engineering teams that perform real‑time code review, suggest secure patterns, and catch vulnerabilities before they ship.

Own the Security AI Stack:

Evaluate, select, and implement the right mix of frameworks, orchestration tools, and infrastructure for the department’s agent platform. You should have strong opinions — backed by hands‑on experience — on LangGraph, LangChain, CrewAI, AutoGen, OpenAI Agents SDK, Google ADK, Semantic Kernel, Dify, n8n, and the broader ecosystem.

Governance and Framework Automation:

Build agents that continuously validate configurations, access policies, and data handling against regulatory and internal frameworks of the agents deployed by our investment teams.

Be the agentic security thought leader:

Be the person the department looks to for what’s possible. Stay deeply current on the AI landscape — enterprise and open‑source — and translate that knowledge into real capability.

PILLAR 2 (50%) — Forward‑Deployed AI Security Architect: Protect It

Deep Architecture & Sandboxing:

Design secure deployment architectures for AI agents across the firm. Define sandboxing strategies, execution boundaries, network isolation, and blast‑radius controls that let teams move fast without exposing the organization to unacceptable risk.

Identity & Authorization for Agents:

Architect identity strategies for a world where agents act on behalf of humans. Define how agents authenticate, what permissions they hold, how credentials are scoped and rotated, and how to enforce least‑privilege across multi‑agent systems and MCP server integrations.

AI Supply Chain Security:

Own the security posture of the AI supply chain end to end. Evaluate the security of agent frameworks, MCP servers, skills/plugins, model providers, embedding pipelines, vector databases, and every dependency in between. Understand the attack surface of tools like LangGraph, LangFlow, Dify, n8n, Open Interpreter, Claude Code, Cursor, and similar agentic development environments.

Prompt Injection & Model Manipulation Defense:

Be the firm’s leading expert on prompt injection, jailbreaking, data poisoning, indirect injection via tool outputs, and agent manipulation attacks. Design and deploy runtime defenses using tools like NeMo Guardrails, LlamaFirewall, LLM Guard, OpenGuardrails, Guardrails AI, and custom detection layers.

Runtime Safety & Governance:

Build monitoring, kill switches, escalation triggers, and anomaly detection for AI agents in production. Design human‑in‑the‑loop checkpoints calibrated to risk tolerance and action severity. Implement policy‑as‑code that governs agent behavior, tool access, data exposure, and output validation.

Secure Agent‑to‑Agent Communication:

Architect trust boundaries and communication protocols for multi‑agent systems — ensuring orchestration, tool use, and data sharing follow least‑privilege principles and are resilient to injection and manipulation.

Security Reviews & Red Teaming:

Conduct deep‑dive security architecture reviews of agentic systems before they go to production. Red‑team LLM integrations and agent workflows to find weaknesses before adversaries do.

What We Expect
You need to have a deep understanding and pulse of the AI market — both enterprise and open‑source. This space moves weekly. We need someone who’s already in it, not someone planning to catch up.

We expect this person to be fluent across the full AI stack. Not at a surface level — at the level of someone who has built with these tools, broken them, and understands their security implications from the inside.

AI Foundations & Model Layer

LLM APIs and SDKs (OpenAI, Anthropic, Google Vertex AI, Azure OpenAI, Bedrock, Mistral, Cohere) — authentication, token management, rate limiting, data handling, and model routing.

Retrieval‑Augmented Generation (RAG) pipelines end to end: embedding models, chunking strategies, vector databases (Pinecone, Weaviate, Chroma, pgvector, Qdrant), retrieval patterns, and the security implications of each.

Fine‑tuning, prompt engineering, and system prompt design — and how each creates or mitigates attack surface.

Agent Frameworks & Orchestration

Deep, hands‑on experience with modern agent frameworks: LangGraph, LangChain, CrewAI, AutoGen, OpenAI Agents SDK, Google ADK, Semantic Kernel, Pydantic AI, Strands Agents, LlamaIndex, and Agno.

Visual and low‑code agent platforms: Dify, LangFlow, Flowise, n8n (AI Agent nodes), and their security tradeoffs.

Agentic coding tools and environments: Claude Code, Cursor, Windsurf, Open Interpreter, Aider, and similar — understanding how these tools interact with codebases, filesystems, and APIs, and the risks they introduce.

Model Context Protocol (MCP): Deep understanding of MCP server architecture, tool registration, trust boundaries, and the emerging attack surface around MCP‑based integrations.

AI Security Tooling & Defense

Runtime guardrail frameworks: NVIDIA NeMo Guardrails, Meta LlamaFirewall, LLM Guard, OpenGuardrails, Guardrails AI, Rebuff, and custom detection pipelines.

AI‑specific attack vectors: prompt injection (direct and indirect), jailbreaking, data exfiltration via tool use, agent goal hijacking, training data poisoning, model inversion, and supply chain attacks on model weights and plugins.

AI governance and compliance standards: OWASP Top 10 for LLMs, NIST AI RMF, EU AI Act, ISO 42001 — and practical implementation of these frameworks.

AI red‑teaming tools and methodologies for testing agents, models, and end‑to‑end agentic workflows in adversarial conditions.

Minimum Qualifications

10+ years of experience in software engineering, security engineering or application security with demonstrated impact at a senior or staff level.

3+ years of hands‑on experience building, deploying, or securing AI/ML systems, including LLM‑based applications and agentic workflows.

Proven track record of building production‑grade AI agents or agent‑powered tools — not just evaluating or advising on them.

Deep, current knowledge of the AI agent ecosystem across enterprise and open‑source: frameworks, orchestration tools, model providers, RAG infrastructure, and developer tooling.

Demonstrated expertise in AI‑specific security threats, including prompt injection defense, agent sandboxing, identity for autonomous systems, and supply chain security for AI toolchains.

Experience securing cloud‑native applications and infrastructure (AWS, Azure, or GCP) with strong understanding of identity, networking, and data protection.

Expert in Python and/or TypeScript with the ability to build production‑grade security tooling, agents, and automation.

Proven ability to work as an embedded partner with engineering and research teams — influencing through expertise and trust, not mandates.

Exceptional communication skills: able to translate complex AI security concepts into clear, actionable guidance for engineers, researchers, and leadership.

Strong judgment in balancing security risk, business velocity, and the realities of a fast‑moving AI landscape.

Preferred Qualifications

Contributions to open‑source AI security projects or frameworks.

Background in financial services or other highly regulated industries.

Experience red‑teaming LLMs and agentic systems in adversarial settings.

Familiarity with AI observability and tracing tools (LangSmith, Langfuse, Helicone, Arize) for monitoring agent behavior in production.

Physical Requirements
This role is offered as hybrid with options to work out of our NYC or CT offices.

Compensation
The wage range for this role is $450,000 – $600,000 inclusive of base salary and discretionary target bonus. The expected base salary for this role is between 65 – 75 % of this wage range.

Equal Opportunity Employer
Bridgewater is an Equal Opportunity Employer. All employment decisions will be made without regard to race, color, creed, religion, ancestry, national origin, age, sex, marital status, civil union status, pregnancy, sexual orientation, transgender status, gender identity or expression, present or past mental disability, learning disability, physical disability, genetic information, military status, veteran status or any other characteristic protected by law.

Immigration Sponsorship
Please note that we do not provide immigration sponsorship for this position.

#J-18808-Ljbffr