Job Description


When hiring, we look for candidates who can thrive in our culture of trust, feedback, and rapid growth. We believe that diversity and inclusivity are essential to our success, and we provide equal employment opportunities regardless of background or identity. Our opportunities support remote, hybrid, or onsite work at our offices in New York City, San Francisco, or Silicon Valley, and we're dedicated to creating an environment where all employees can do their best work and contribute to the growth of our platform.


  • Design, build, and maintain data pipelines end-to-end, ensuring data accuracy, availability, and quality for internal customers in Production, Trust & Safety, and Business Analytics

  • Collaborate closely with Data Scientists to understand data requirements, develop data models, and optimize data pipelines for advanced analytics and machine learning use cases

  • Develop, maintain, and own scalable, efficient, and reliable ETL processes, using best practices for data ingestion, storage, and processing for business-critical warehousing use cases

  • Work with stakeholders to identify and prioritize analytics requirements, and build out necessary analytics tools and dashboards

  • Proactively monitor data pipelines, troubleshoot, and resolve data-related issues

  • Contribute to the continuous improvement of data engineering practices, including documentation, code reviews, and knowledge sharing

Desired Experience

  • 5+ years of experience in data engineering

  • Fluency in data replication and transport techniques and tools, such as CDC, Debezium, Fivetran, and logical replication

  • Experience with big data technologies such as Snowflake, Hadoop, Spark, Airflow, or Flink

  • Strong knowledge of AWS services, particularly those related to data storage, processing, and analytics (e.g., S3, Redshift, Glue, EMR, Kinesis, Lambda, and Athena)

  • Expertise in SQL and proficiency in at least one programming language (Kotlin, Python, Go, Java, TypeScript)

  • Familiarity with data warehousing concepts and schema design principles (e.g., Star Schema, Snowflake Schema)

  • Strong problem-solving skills, a data-driven mindset, and a passion for working with large, complex datasets

  • Excellent communication and collaboration skills, with the ability to work effectively across teams and stakeholders

  • Familiarity with blockchain data, how it is structured, and the general semantics of EVM blockchains is a plus, but not required

If you don't think you meet all of the criteria above but are still interested in the job, please apply. Nobody checks every box, and we're looking for someone who is excited to join the team.

The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges from $150,000 to $245,000, plus benefits and equity.

  • AWS
  • Communications Skills
  • Problem Solving
  • Python
  • SQL
  • Team Collaboration
  • TypeScript