Job Description

Summary

The Data Solutions team is building the core intelligence platform that leading public‑sector agencies and private‑sector customers use to investigate threat actors, monitor risk in real time, and derive insights from blockchain and related data at scale. We run lean and move fast; we’re looking for engineers who are hungry for ownership and responsibility, care deeply about understanding our customers’ mission requirements, and are excited to deliver real‑world impact.

In this role, you’ll:

  1. Design and lead delivery of new platform capabilities that serve mission‑critical investigations and monitoring workflows.
  2. Operate services that ingest, transform, and serve hundreds of terabytes of data with clear SLOs for latency, freshness, and availability.
  3. Improve the scalability, performance, and cost efficiency of our data plane and APIs.
  4. Raise the quality bar across reliability, security, and compliance for both cloud and on‑premises deployments.
  5. Mentor engineers across teams and influence technical strategy beyond your immediate group.
  6. Own and evolve backend services powering customer‑facing APIs, usage/billing, alerting, and data observability.
  7. Lead team and cross-team initiatives end‑to‑end: discovery, architecture, implementation, rollout, and post‑launch learning.
  8. Architect event‑driven and streaming workflows (e.g., Kafka) with strong data contracts and schema evolution.
  9. Drive operational excellence: SLOs, runbooks, on‑call, incident reviews, and capacity plans for high‑QPS systems.
  10. Partner with product, data engineering/science, and security to translate customer requirements into durable systems.

We’re looking for candidates who have:

  1. Expert backend engineering experience building cloud‑hosted services and data pipelines on AWS or GCP (bonus: both).
  2. Deep proficiency with APIs, streaming systems, and distributed systems (e.g., microservices on Kubernetes).
  3. Demonstrated ownership of systems operating at scale (hundreds to thousands of RPS; TB–PB data volumes).
  4. High judgment on reliability, security, and cost, with a track record of measurable improvements.
  5. Ability to lead without authority: mentoring, design reviews, and cross‑org influence.

Nice‑to‑have experience:

  1. Blockchain domain knowledge (protocol fundamentals; smart contracts/Solidity).
  2. Databricks experience (Spark, Delta Lake, Delta Live Tables) or PySpark at scale.
  3. Multi‑tenant, usage tracking, and billing systems experience.
  4. On‑premises or regulated/air‑gapped deployments.

Technologies we use:

  1. Languages: Python
  2. Orchestration & Runtime: Kubernetes, Docker, Cloud Functions/Cloud Run, Terraform
  3. Streaming & Messaging: Kafka
  4. Data Platform: Spark, Delta Lake, DLT, SQL databases
  5. Cloud: GCP and AWS
  6. Edge & Networking: Cloudflare

Skills
  • AWS
  • Communication Skills
  • Development
  • Python
  • Software Engineering
  • Team Collaboration