Bloomberg

AI Governance & Risk Strategy Lead

Bloomberg, New York, New York, US 10261


Overview

The energy of a newsroom, the pace of a trading floor, the buzz of a recent tech breakthrough: we work hard, and we work fast, while keeping up the quality and accuracy we're known for. It's what keeps us inventing and reinventing, all the time. Our culture is wide open, just like our spaces. We bring out the best in each other through collaboration. Through our countless volunteer projects, we also connect with the communities around us. You can do amazing work here. Work you couldn't do anywhere else. It's up to you to make it happen.

Bloomberg’s Chief Risk Office (CRO) plays a central role in ensuring that innovation is pursued responsibly across our global operations. As AI becomes increasingly embedded in our products and platforms, the CRO Strategy and Operations team is focused on designing robust frameworks, policies, and controls to govern AI adoption with transparency, fairness, and accountability. Our cross-functional work spans Legal, Engineering, Product, Information Security (CISO), and Compliance to ensure Bloomberg’s AI systems operate safely, ethically, and in alignment with evolving regulatory standards.

What’s the role?

We’re seeking an AI Governance & Risk Strategy Lead to help refine and scale our enterprise-wide AI risk program. This person will play a critical role in maturing our frameworks for responsible AI, partnering with senior stakeholders across Technology, Legal, Compliance, Data, and Product to ensure the safe, ethical, and compliant use of AI systems across Bloomberg.

Responsibilities

AI Governance & Frameworks

Enhance our enterprise AI Risk Management framework, including inventory, classification, and risk-tiering mechanisms (an illustrative sketch follows this list)

Develop scalable, end-to-end governance processes across the AI lifecycle: design, development, deployment, production, and retirement

Identify opportunities for automation and process improvements to strengthen controls and oversight
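
For illustration only (not part of the role's requirements), an inventory-plus-risk-tiering mechanism of the kind described above can be sketched as a small data model with a scoring rule. Every name, field, and threshold below is a hypothetical example, not Bloomberg's actual framework.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One entry in a hypothetical enterprise AI inventory."""
    name: str
    use_case: str
    customer_facing: bool          # exposed to external users?
    processes_personal_data: bool  # in scope for GDPR/CPRA?
    autonomous_decisions: bool     # acts without human review?


def assign_risk_tier(system: AISystemRecord) -> RiskTier:
    """Toy risk-tiering rule: escalate as impact factors stack up."""
    score = sum([
        system.customer_facing,
        system.processes_personal_data,
        system.autonomous_decisions,
    ])
    if score >= 2:
        return RiskTier.HIGH
    if score == 1:
        return RiskTier.MEDIUM
    return RiskTier.LOW


if __name__ == "__main__":
    record = AISystemRecord(
        name="news-summarizer",
        use_case="summarize articles for terminal users",
        customer_facing=True,
        processes_personal_data=False,
        autonomous_decisions=False,
    )
    print(record.name, assign_risk_tier(record).value)
```

In practice the classification factors and escalation thresholds would be set by policy; the point of the sketch is only that an inventory record plus a deterministic scoring rule yields a repeatable tier assignment.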

Cross-Functional Collaboration

Partner with Legal, Compliance, Privacy, Security, Engineering, and Product teams to address emerging AI risks and ensure effective policy implementation

Facilitate stakeholder working groups, communications, and executive updates on AI risk and governance

Monitoring & Oversight

Establish and monitor key risk indicators for AI systems (e.g., model drift, hallucination, bias); see the drift-metric sketch after this list

Ensure alignment with global AI regulatory requirements (e.g., EU AI Act) and respond to regulatory inquiries or reviews

Evaluate risks tied to third-party AI solutions, including sourcing, onboarding, integration, and ongoing oversight
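
As background on the key risk indicators mentioned above, one widely used drift metric is the Population Stability Index (PSI), which compares a reference distribution of model inputs or scores against live production data. The sketch below is a minimal illustration; the bin count and the rule-of-thumb alert thresholds are assumptions, not Bloomberg practice.

```python
import numpy as np


def population_stability_index(reference, production, bins=10):
    """PSI = sum over bins of (p_prod - p_ref) * ln(p_prod / p_ref).

    Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 watch, > 0.25 drift.
    """
    # Bin edges are derived from the reference distribution.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    prod_counts, _ = np.histogram(production, bins=edges)

    # Convert to proportions, flooring at eps to avoid log(0).
    eps = 1e-6
    p_ref = np.clip(ref_counts / ref_counts.sum(), eps, None)
    p_prod = np.clip(prod_counts / prod_counts.sum(), eps, None)

    return float(np.sum((p_prod - p_ref) * np.log(p_prod / p_ref)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # scores at model validation
    live = rng.normal(0.3, 1.1, 10_000)      # shifted production scores
    psi = population_stability_index(baseline, live)
    print(f"PSI = {psi:.3f}")  # > 0.25 would flag potential drift
```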

Enablement

Serve as an internal subject matter expert and thought leader on responsible AI use

Support AI risk training, awareness, and culture-building across the organization

You’ll Need to Have:

10+ years of experience in Technology Risk, Data and Security Risk, or AI/ML, including at least 3 years directly focused on AI governance or oversight

Direct experience designing and implementing enterprise AI Risk or Responsible AI programs

Strong grasp of AI/ML technical risks (e.g., bias, explainability, model drift, robustness) and associated controls

Hands-on familiarity with generative AI tools (e.g., ChatGPT, Claude, AWS Bedrock) and their risk implications

Strong change management and stakeholder engagement skills, with a track record of influencing without authority across technical and business domains

Knowledge of data governance practices, including metadata management, data lineage, and data minimization as they pertain to AI models

Working knowledge of privacy, compliance, and regulatory frameworks (e.g., GDPR, CPRA, EU AI Act)

Excellent communication skills with experience presenting to senior stakeholders

We’d love to see:

Experience in Data Risk Management or direct collaboration with AI/ML development teams

Familiarity with AI risk management platforms or tools for model monitoring, documentation, and compliance reporting

Experience designing training, awareness, or enablement programs focused on AI risk, model governance, or responsible AI practices

Familiarity with frameworks such as NIST AI RMF, ISO/IEC 23894, or OECD AI Principles

Certifications in risk, privacy, or compliance (e.g., CIPP, CIPM, CRISC, CRCM)

Passion for AI and a desire to build a world-class risk management function
