
Data Engineer

Q2, Austin, TX, United States


About the Role

Q2, a leading provider of digital banking and lending solutions, seeks a data engineer to build and operate data architecture for our Risk & Fraud team. You will ensure reliable, scalable data that powers fraud analytics, machine learning models, agent interfaces, and customer‑facing tools.
Key Responsibilities

Design, build, and maintain scalable data pipelines and workflows in a cloud environment
Deliver clean, well-structured datasets that support fraud analytics, machine learning models, and agentic solutions
Contribute to improving our data architecture, including ingestion, storage, and access patterns
Own data operations: monitor pipeline executions, triage failures, and resolve data issues
Improve observability and performance by implementing monitoring and optimizing pipelines for reliability, scalability, and cost efficiency
Partner with product managers, data scientists, and engineers to translate fraud and risk requirements into data solutions that meet business needs
Develop and orchestrate pipelines, defining data flow, transformations, and dataset relationships
Write maintainable code, participate in code reviews, and help improve testing, deployment, and documentation standards
Requirements

Typically requires a Bachelor’s degree in a relevant field and a minimum of 2 years of related experience; or an advanced degree without experience; or equivalent work experience.
Experience building and maintaining data pipelines and workflows in production environments
Proficiency in SQL and working with relational and/or analytical data stores
Experience with Python
Familiarity with data modeling, transformation, and orchestration concepts
Experience with data warehouses and distributed data processing systems
Experience with version control (e.g., Git) and CI/CD practices
Ability to troubleshoot data issues, debug pipelines, and work through ambiguous problems
Nice to Have

Experience with tools such as Apache Airflow, dbt, Kafka, Airbyte, or FiveTran
Experience with Snowflake or similar cloud data warehouses
Experience with SQL Server, PostgreSQL, or NoSQL systems like DynamoDB
Familiarity with infrastructure as code tools such as Terraform
Experience with Docker and/or Kubernetes
Exposure to platforms such as Databricks, AWS Glue, AWS SageMaker, or Snowpark
Benefits

Hybrid Work Opportunities
Flexible Time Off
Career Development & Mentoring Programs
Health & Wellness Benefits, including competitive health insurance offerings and generous paid parental leave for eligible new parents
Community Volunteering & Company Philanthropy Programs
Employee Peer Recognition Programs – "You Earned It"
Legal and Compliance

Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time.
We are an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, genetic information, or veteran status.
Applicants in California or Washington State may not be exempt from federal and state overtime requirements.
