Alnylam Pharmaceuticals

Associate Director, Principal Data Platform Engineer / Solutions Engineer

Alnylam Pharmaceuticals, Cambridge, Massachusetts, US, 02140

The Opportunity

As Principal Data Platform Engineer, you’ll be the technical cornerstone of our enterprise data platform. This is not a business‑aligned role: you’ll work horizontally across the organization, building the shared infrastructure, tooling, frameworks, and capabilities that every business domain depends on. Your scope spans the full platform stack, from cloud infrastructure and compute engines to orchestration, transformation, semantic layers, and observability.

A key objective for this role is maturing our enterprise lakehouse architecture on Snowflake and Databricks, but you’ll have the flexibility to work across any area of the platform where your skills create the most impact. Whether that’s optimizing our dbt semantic layer, building CI/CD automation, designing security patterns, or solving complex integration challenges—you’ll go where the hardest problems are.

This is a hands‑on, deeply technical role for someone who wants to solve hard platform engineering problems at scale. You’ll design the architecture that enables dozens of data engineers across R&D, Clinical, Manufacturing, Commercial, and G&A to build reliable data products. You’ll implement the CI/CD pipelines, observability frameworks, security patterns, and self‑service capabilities that make the platform a force multiplier for the entire organization.

We’re looking for someone who thinks in systems, obsesses over reliability, and finds deep satisfaction in building infrastructure that other engineers love to use. You’ll have significant autonomy to shape our platform’s technical direction while collaborating closely with domain‑aligned data engineering teams that depend on your work.

What You’ll Do

Enterprise Data Platform Architecture & Engineering

Own the technical architecture and engineering of our enterprise data platform spanning Snowflake, Databricks, AWS, and supporting tooling

Drive the maturation of our lakehouse architecture: multi‑layer medallion patterns (raw/bronze, curated/silver, consumption/gold) with clear contracts and governance (see the sketch after this list)

Architect Snowflake infrastructure: account topology, warehouse sizing strategies, resource monitors, data sharing configurations, and Snowflake Cortex AI integration

Design Databricks platform architecture: Unity Catalog implementation, workspace federation, cluster policies, and Delta Lake optimization patterns

Build and maintain integration patterns between Snowflake and Databricks for unified analytics and ML workflows

Implement data mesh principles: domain ownership boundaries, data product interfaces, and federated governance within the centralized platform

Design storage architecture on AWS S3: bucket strategies, lifecycle policies, cross‑region considerations, and cost optimization

Architect streaming and real‑time data capabilities using Kafka, Kinesis, Spark Structured Streaming, or Snowpipe Streaming

Flex across platform areas based on organizational priorities—lakehouse, semantic layer, orchestration, security, or emerging needs
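
To make the medallion promotion pattern above concrete, here is a minimal PySpark sketch of a bronze‑to‑silver step. It assumes Delta Lake is available on the cluster (e.g., a Databricks runtime); the table names, columns, and dedup key are hypothetical.

```python
# Minimal bronze -> silver promotion sketch for a medallion lakehouse.
# Assumes Delta Lake is available on the cluster (e.g., Databricks);
# table names, columns, and the dedup key are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw events landed as-is from the source system.
bronze = spark.read.table("raw.clinical_events")

# Silver: typed, deduplicated, and conformed to the layer's contract.
silver = (
    bronze
    .filter(F.col("event_id").isNotNull())           # enforce key contract
    .dropDuplicates(["event_id"])                    # idempotent reloads
    .withColumn("event_ts", F.to_timestamp("event_ts"))
)

silver.write.format("delta").mode("overwrite").saveAsTable("curated.clinical_events")
```

The “contract” here is deliberately small (non‑null keys, deduplication, typed timestamps); a production silver layer would also pin down schema evolution and freshness SLAs.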

dbt & Semantic Layer Architecture

Own the enterprise dbt architecture: project structure, environment strategies, deployment patterns, and multi‑project orchestration

Design and implement the dbt Semantic Layer: metrics definitions, semantic models, entities, and dimensions that provide a single source of truth for business metrics

Build semantic layer integration patterns with downstream tools: BI platforms (Tableau, Power BI), notebooks, and AI/ML workflows via the Semantic Layer APIs

Develop enterprise metric definitions in collaboration with business stakeholders, ensuring consistent KPI calculations across all consumption tools

Implement MetricFlow configurations: measures, dimensions, time spines, and derived metrics that enable flexible metric queries

Design semantic model governance: naming conventions, documentation standards, versioning strategies, and change management processes

Build shared dbt packages, macros, and patterns that encode best practices and reduce duplication across domain teams

Implement dbt testing frameworks: schema tests, data tests, unit tests, and custom generic tests for data quality validation

Design dbt documentation strategies: auto‑generated docs, lineage visualization, and integration with data catalogs

Optimize dbt performance: incremental model strategies, model selection syntax, defer patterns, and efficient DAG design (a sketch follows this list)
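
As a concrete illustration of the selection and defer patterns in the last item, here is a minimal “slim CI” sketch that wraps dbt Core in Python. The `--select state:modified+`, `--defer`, and `--state` flags are standard dbt options; the artifact directory is hypothetical.

```python
# "Slim CI" sketch: build only models changed relative to production,
# deferring unchanged upstream refs to the prod manifest. Assumes dbt
# Core is installed and prod artifacts (manifest.json) were downloaded
# to ./prod-artifacts (hypothetical path).
import subprocess

def slim_ci_build(state_dir: str = "prod-artifacts") -> None:
    # state:modified+ = changed models plus their downstream dependents;
    # --defer resolves unselected refs against the production state.
    subprocess.run(
        ["dbt", "build",
         "--select", "state:modified+",
         "--defer",
         "--state", state_dir],
        check=True,
    )

if __name__ == "__main__":
    slim_ci_build()
```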

Platform Tooling & Developer Experience

Own the data platform toolchain: Astronomer (Airflow), Fivetran, dbt Cloud/Core, Monte Carlo, and supporting infrastructure

Design and implement standardized project templates, cookiecutters, and scaffolding for domain teams to quickly bootstrap new data products

Implement CI/CD pipelines for data infrastructure using GitHub Actions, enabling automated testing, deployment, and promotion across environments

Design the developer experience: local development workflows, IDE configurations (VS Code dbt Power User), debugging tools, and documentation systems

Build internal CLIs and automation tools that simplify common platform operations for domain engineers (see the sketch after this list)

Create and maintain comprehensive platform documentation, runbooks, and training materials

Evaluate and integrate new platform tools, conducting proof‑of‑concepts and making build vs. buy recommendations
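
As one example of the internal tooling mentioned above, here is a minimal sketch of a scaffolding CLI built with Python’s argparse; the tool name, subcommand, and template path are hypothetical.

```python
# Sketch of an internal platform CLI ("platformctl", hypothetical) that
# bootstraps a new data product from a shared project template.
import argparse
import pathlib
import shutil

TEMPLATE_DIR = pathlib.Path("templates/dbt_project")  # hypothetical template

def scaffold(args: argparse.Namespace) -> None:
    target = pathlib.Path(args.name)
    shutil.copytree(TEMPLATE_DIR, target)  # copy the template tree
    print(f"Scaffolded {target}/ from {TEMPLATE_DIR}/")

def main() -> None:
    parser = argparse.ArgumentParser(
        prog="platformctl", description="Internal data platform helper")
    sub = parser.add_subparsers(dest="command", required=True)
    new = sub.add_parser("new", help="Bootstrap a data product from the template")
    new.add_argument("name", help="Project directory to create")
    new.set_defaults(func=scaffold)
    args = parser.parse_args()
    args.func(args)

if __name__ == "__main__":
    main()
```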

Infrastructure as Code & Automation

Own all platform infrastructure as code using Terraform, managing Snowflake, Databricks, AWS, and tooling resources

Design Terraform module architecture: reusable modules for common patterns, environment management, and state strategies

Implement GitOps workflows for infrastructure changes with proper review, testing, and rollback capabilities

Build automated provisioning for new domains, projects, and environments with self‑service capabilities where appropriate

Design secrets management and configuration patterns using AWS Secrets Manager, Parameter Store, or HashiCorp Vault

Implement infrastructure testing: policy‑as‑code with Sentinel or OPA, drift detection, and compliance validation

Data Quality & Observability Platform

Own the enterprise data observability platform built on Monte Carlo, implementing monitors, alerts, and incident workflows

Design data quality frameworks: standardized validation patterns, quality scoring, and SLA definitions that domain teams adopt

Implement end‑to‑end data lineage tracking across Snowflake, Databricks, dbt, and consumption tools

Build pipeline observability: DAG monitoring, SLA tracking, failure alerting, and performance trending in Astronomer/Airflow (see the sketch after this list)

Design platform metrics and dashboards: compute utilization, storage growth, query performance, and cost allocation

Implement anomaly detection for data freshness, volume, schema changes, and distribution drift

Create incident response processes: alerting escalation, on‑call rotations, runbooks, and post‑mortem frameworks
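
To ground the pipeline‑observability item above, here is a minimal Airflow 2.x sketch combining a task SLA, retries, and a failure callback; the DAG, task, and notification hook are hypothetical.

```python
# Observability sketch for Airflow 2.x: SLA tracking, retries, and a
# failure callback. The DAG name, task, and paging hook are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def page_on_call(context):
    # Hypothetical hook; in practice this might post to PagerDuty or Slack.
    print(f"ALERT: task {context['task_instance'].task_id} failed")

default_args = {
    "owner": "data-platform",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "sla": timedelta(hours=1),              # breaches surface as SLA misses
    "on_failure_callback": page_on_call,
}

with DAG(
    dag_id="curated_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
    default_args=default_args,
) as dag:
    PythonOperator(
        task_id="refresh_silver_tables",
        python_callable=lambda: print("refreshing..."),
    )
```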

Security, Access Control & Governance

Design and implement the platform security architecture: network isolation, encryption patterns, and secure connectivity

Own role‑based access control (RBAC) implementation across Snowflake, Databricks Unity Catalog, and AWS IAM

Implement data classification and tagging frameworks that enable automated policy enforcement

Design row‑level security, column masking, and dynamic data masking patterns for sensitive data protection

Build audit logging and access monitoring capabilities for compliance and security investigations

Partner with Security and Compliance on GxP validation, 21 CFR Part 11, SOX, and GDPR requirements for the platform

Implement service account management, API key rotation, and credential lifecycle automation (a sketch follows below)
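
As a sketch of the credential‑lifecycle item above, the snippet below enables scheduled rotation on a service‑account secret in AWS Secrets Manager and reads its current value; the secret name and Lambda ARN are hypothetical, while the boto3 calls are standard.

```python
# Credential lifecycle sketch using AWS Secrets Manager via boto3.
# The secret name and rotation Lambda ARN are hypothetical.
import json
import boto3

sm = boto3.client("secretsmanager")

def enable_rotation(secret_id: str, rotation_lambda_arn: str) -> None:
    # Delegate rotation to a Lambda on a 30-day schedule.
    sm.rotate_secret(
        SecretId=secret_id,
        RotationLambdaARN=rotation_lambda_arn,
        RotationRules={"AutomaticallyAfterDays": 30},
    )

def get_service_account(secret_id: str) -> dict:
    # Consumers always read the current version, so rotation is invisible.
    resp = sm.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])

creds = get_service_account("platform/snowflake/svc_dbt")  # hypothetical name
```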

Performance Engineering & Cost Optimization

Own platform performance: query optimization patterns, warehouse tuning, cluster sizing, and caching strategies

Design cost allocation and chargeback models: tagging strategies, usage attribution, and department‑level reporting (see the sketch after this list)

Implement cost optimization initiatives: auto‑suspend policies, storage tiering, compute right‑sizing, and reserved capacity planning

Build performance benchmarking and regression testing frameworks for critical data pipelines

Design capacity planning models and forecasting for platform growth

Optimize data formats, partitioning strategies, and clustering keys for query performance at scale
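
To illustrate the usage‑attribution side of the cost items above, here is a minimal sketch that pulls 30 days of per‑warehouse credit consumption from Snowflake’s ACCOUNT_USAGE share; connection parameters are placeholders.

```python
# Cost attribution sketch: per-warehouse credit usage over 30 days from
# the standard ACCOUNT_USAGE share. Connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",            # placeholder
    user="your_user",                  # placeholder
    authenticator="externalbrowser",   # or key-pair auth in automation
)

CREDITS_BY_WAREHOUSE = """
    SELECT warehouse_name, SUM(credits_used) AS credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
    ORDER BY credits DESC
"""

with conn.cursor() as cur:
    for warehouse, credits in cur.execute(CREDITS_BY_WAREHOUSE):
        print(f"{warehouse}: {credits:.1f} credits (30d)")
conn.close()
```

Joining this output against query tags or object tags is one common way to roll credits up to department‑level chargeback.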

Technical Leadership & Collaboration

Serve as the technical authority on data platform architecture, providing guidance to domain‑aligned data engineering teams

Lead architecture reviews and design discussions for cross‑cutting platform capabilities

Mentor and coach data engineers across the organization on platform best practices, dbt patterns, and semantic layer design

Drive technical standards, coding conventions, and engineering practices for the data platform

Collaborate with Enterprise Architecture on technology strategy, vendor relationships, and roadmap alignment

Partner with domain Directors to understand business requirements and translate them into platform capabilities

Represent platform engineering in vendor discussions with Snowflake, Databricks, dbt Labs, Astronomer, and other partners

Stay current with data platform trends, evaluating new technologies and presenting recommendations

What You’ll Bring

Required Technical Expertise

Data Platform Engineering

10+ years of experience in data engineering, with 5+ years focused on platform/infrastructure engineering

Expert‑level Snowflake experience: account administration, performance tuning, security configuration, data sharing, Snowpark, Streams/Tasks, Cortex AI

Expert‑level Databricks experience: Unity Catalog administration, workspace management, cluster optimization, Delta Lake internals, MLflow integration

Deep AWS expertise: S3, IAM, VPC networking, Lake Formation, Glue, Lambda, Step Functions, Secrets Manager, CloudWatch

Production experience with lakehouse architectures: medallion patterns, data mesh implementation, and multi‑tenant platform design

Strong understanding of distributed systems, data modeling at scale, and performance optimization

dbt & Semantic Layer Expertise (Required)

Deep dbt expertise (3+ years): project architecture, advanced Jinja templating, custom macros, package development, and CI/CD integration

Hands‑on experience with dbt Semantic Layer: defining metrics, semantic models, entities, dimensions, and measures using MetricFlow

Experience designing enterprise metric frameworks: KPI hierarchies, derived metrics, time‑based aggregations, and metric versioning

Knowledge of Semantic Layer integration patterns: connecting dbt metrics to BI tools, notebooks, and downstream applications via APIs

Experience with dbt Cloud features: environment management, job orchestration, CI/CD, IDE, and Semantic Layer hosting

Strong understanding of dbt best practices: model organization (staging/intermediate/marts), incremental strategies, ref() and source() patterns

Experience building reusable dbt packages: creating macros, generic tests, and materializations for organization‑wide adoption

Knowledge of dbt testing patterns: schema tests, data tests, unit tests (dbt‑unit‑testing), and integration with data observability tools

Experience optimizing dbt performance: model selection, incremental processing, defer, and state‑based comparisons

Data Platform Tooling

Expert experience with Apache Airflow/Astronomer: DAG design patterns, custom operators, plugins, and production operations

Experience with data integration tools: Fivetran, Airbyte, or similar managed ingestion platforms

Experience with data observability platforms: Monte Carlo, Datafold, Elementary, or similar tools

Knowledge of data catalog and governance tools: Collibra, Alation, Atlan, or native platform catalogs

Infrastructure & DevOps

Expert Terraform skills: module design, state management, workspace strategies, and provider development

Strong CI/CD experience: GitHub Actions, automated testing, deployment pipelines, and GitOps workflows

Container experience: Docker, and optionally Kubernetes/EKS for platform services

Scripting and automation: Python, Bash, and building internal tools/CLIs

Experience with secrets management, configuration management, and infrastructure security patterns

Monitoring and observability: Datadog, CloudWatch, PagerDuty, or similar platforms

Programming & Data Engineering

Expert‑level SQL: complex analytical queries, performance optimization, window functions, and platform‑specific SQL extensions

Expert‑level Python: production code quality, package development, testing frameworks, and async patterns

Apache Spark expertise: DataFrame APIs, Spark SQL, performance tuning, and PySpark best practices

Data modeling experience: dimensional modeling, Data Vault, normalized designs, and schema evolution strategies

Experience with streaming architectures: Kafka, Kinesis, Spark Structured Streaming, or Flink (a sketch follows below)
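
As a sketch of one streaming pattern named above, the snippet below reads a Kafka topic with Spark Structured Streaming and appends to a Delta path; brokers, topic, and paths are hypothetical, and the Kafka and Delta connectors must be on the classpath.

```python
# Structured Streaming sketch: Kafka -> Delta append. Assumes the
# spark-sql-kafka and Delta Lake packages are available; broker, topic,
# and S3 paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical
    .option("subscribe", "clinical-events")             # hypothetical topic
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("ingested_at"),
    )
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "s3://bucket/checkpoints/events")  # hypothetical
    .outputMode("append")
    .start("s3://bucket/bronze/events")                              # hypothetical
)
query.awaitTermination()
```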

Required Experience & Background

Platform Engineering Background

Proven track record building and operating enterprise data platforms serving multiple business domains

Experience designing self‑service capabilities that empower domain teams while maintaining platform governance

History of building reusable frameworks, libraries, and tooling adopted across engineering organizations

Experience implementing semantic layers or metrics platforms that standardize business definitions across organizations

Experience with platform reliability engineering: SLAs, SLOs, incident management, and operational excellence

Track record of cost optimization initiatives with measurable financial impact

Industry Experience

5+ years in data‑intensive industries; biotech, pharmaceutical, healthcare, or life sciences experience preferred

Experience operating platforms in regulated environments (FDA, GxP, SOX, HIPAA) preferred

Understanding of data governance, compliance requirements, and audit trail needs

Technical Leadership

Experience as a technical leader, staff engineer, or principal engineer in platform/infrastructure roles

Demonstrated ability to influence engineering practices across teams without direct authority

Track record of mentoring engineers and elevating team capabilities

Experience representing engineering in vendor relationships and technology evaluations

Strong written and verbal communication for technical documentation and stakeholder engagement

Personal Attributes

Platform mindset: You think about enabling others, not just building features; your success is measured by the productivity of the teams that use your platform.

Systems thinker: You see the big picture, understand dependencies, and design for emergent behavior at scale.

Semantic layer advocate: You believe in a single source of truth for metrics and invest in making business definitions consistent and accessible.

Reliability obsessed: You lose sleep over silent failures, design for resilience, and build comprehensive observability.

Automation zealot: You treat toil as a bug to be fixed and invest in tooling that eliminates repetitive work.

Security‑first: You design with security and compliance as foundational requirements, not afterthoughts.

Pragmatic perfectionist: You balance engineering excellence with delivery velocity, knowing when good enough ships value.

Continuous learner: You stay current with the evolution of dbt, Snowflake, and Databricks, and bring new ideas to the organization.

Collaborative leader: You build relationships across teams, seek input on platform decisions, and communicate changes effectively.

Our Culture & Values

We’re building a data organization that:

Values technical excellence: We believe in rigorous engineering discipline and invest in doing things right.

Celebrates platform thinking: We recognize that great platforms multiply the impact of every engineer.

Embraces accountability: We own outcomes and take responsibility for platform reliability and performance.

Fosters experimentation: We try new approaches but validate rigorously before production deployment.

Prioritizes collaboration: We work as partners with domain teams, understanding that their needs drive our priorities.

Maintains high standards: We balance innovation with operational stability, security, and regulatory compliance.

About Alnylam

Alnylam Pharmaceuticals (Nasdaq: ALNY) has led the translation of RNA interference (RNAi) into a whole new class of innovative medicines with the potential to transform the lives of people afflicted with rare and more prevalent diseases. Based on Nobel Prize‑winning science, RNAi therapeutics represent a powerful, clinically validated approach to treating diseases at their genetic source by “interfering” with the mRNAs that cause or contribute to disease. Since our founding in 2002, Alnylam has led the RNAi Revolution and continues to turn scientific possibility into reality.

Our Culture

Our people‑first culture is guided by our core values: fiercely innovative, open culture, purposeful urgency, passion for excellence, and commitment to people. These values influence how we work and the business decisions we make. Thanks to feedback from our employees over the years, we’ve been fortunate to be named a top employer around the world. Alnylam is extremely proud to have been recognized as one of Science Magazine’s Top Biopharma Employers, one of America’s Most Responsible Companies for 2024 by Newsweek, a Fast Company Best Workplace for Innovators, and a Great Place to Work in Canada, France, Italy, Spain, Switzerland, and the UK, among others.

At Alnylam, we commit to an inclusive recruitment process and equal employment opportunity. We are dedicated to building an environment where employees feel they belong, can bring their authentic selves to work, and can achieve their full potential. By empowering employees to embrace their unique differences at work, our business grows stronger with advanced and original thinking, allowing us to bring groundbreaking medicines to patients.
