
Database Administrator

Longbridge, Dallas, TX, United States


We are building a next-generation brokerage and trading platform for US equities markets. Our databases power order management, market data, clearing and settlement, risk controls, and customer asset systems — workloads that demand sub-millisecond latency, strong consistency, 24/7 availability, and rigorous regulatory compliance. We are looking for a seasoned DBA with deep expertise in financial-grade data systems to help us architect and operate world-class data infrastructure for capital markets.
What You’ll Do

- Own the architecture, deployment, and day-to-day operations of database systems powering our US equities trading platform, including PostgreSQL, MySQL, Redis, and Elasticsearch, as well as time-series and OLAP engines such as ClickHouse and TimescaleDB.
- Design and deliver database architectures for trading, market data, and clearing/settlement paths that meet sub-millisecond response, 24×7 availability, RPO ≈ 0, and minute-level RTO targets.
- Lead capacity planning, backup and recovery, monitoring and alerting, performance tuning, and incident response; drive root-cause analysis and post-mortems for P0/P1 incidents.
- Establish and continuously evolve database standards, change-management procedures, and audit controls that align with US regulatory requirements, including SEC Rule 17a-4, FINRA, and SOX.
- Design and operate multi-region disaster recovery architectures (e.g., US-East / US-West) to ensure business continuity under region- or AZ-level failures.
- Serve as the data expert for engineering teams — reviewing schema designs, auditing and tuning SQL, leading pre-market and load-testing exercises, and gating production changes.
- Drive automation across the database platform: self-service query tooling, automated release pipelines, slow-query governance, capacity forecasting, and more.
- Mentor engineers and run regular training on database best practices to raise the bar across the organization.
What We’re Looking For

- Deep database internals knowledge — strong command of PostgreSQL and/or MySQL storage engines, MVCC, transaction isolation, query planners, and replication (physical, logical, semi-sync); able to diagnose non-trivial issues at the source-code or protocol level.
- High availability and consistency — hands-on experience with production-grade HA solutions (Patroni, MHA, Orchestrator, Redis Sentinel/Cluster, Elasticsearch clustering), and a clear understanding of the CAP and consistency trade-offs each one implies.
- Performance engineering at scale — proven track record operating systems with millions of QPS and TB-to-PB data volumes; fluent in diagnosing and resolving bottlenecks around locking, I/O, execution plans, connection pooling, sharding, and read/write separation.
- Cloud-native databases — strong working knowledge of AWS RDS/Aurora, GCP Cloud SQL/AlloyDB, or equivalents; understanding of their internals, cost models, and multi-AZ / multi-region deployment patterns.
- Observability — proficient with Prometheus, Grafana, pgBadger, Percona Toolkit, and similar tools; able to design end-to-end monitoring and alerting from scratch.
- Communication in English — excellent written and verbal English; able to produce clear, precise design documents, RCAs, and standards. Most internal documentation and some meetings are conducted in English.
- 5+ years of DBA experience on large-scale production systems, with full ownership of architecture, deployment, and long-term evolution.
Nice to Have

- Capital markets experience — prior work at a broker-dealer, exchange, market maker, or clearing firm (US, HK, or other major markets); familiarity with common data models for orders, market data, positions, and settlement.
- Compliance and security — working knowledge of how SEC Rule 17a-4, FINRA, SOX, and PCI-DSS translate into concrete requirements around data retention, immutable audit logs, and encryption; prior experience supporting regulatory audits.
- Low-latency systems — experience tuning database paths for low-latency trading: kernel parameters, NUMA, network stack, storage selection, etc.
- Engineering skills — proficient in at least one of Python, Go, or Java; able to independently build internal tooling and data-platform components.
- Systems fundamentals — strong Linux internals, file systems, and TCP/IP knowledge; comfortable using perf, bpftrace, and eBPF for system-level analysis.
- Time-series / analytics — production experience with ClickHouse, TimescaleDB, kdb+, or InfluxDB in market data or risk use cases.
- Certifications — Oracle OCM, AWS Certified Database – Specialty, PostgreSQL professional certifications, or equivalent.
