Job Description

Summary

Your role:

  1. Discover, analyze and assemble large, complex data sets that meet functional / non-functional business requirements.
  2. Build ETL pipelines for a wide variety of data sources using (mostly) SQL and Python in an AWS environment.
  3. Work with stakeholders across the Executive, Product, and Engineering teams to resolve data-related technical issues and support their data needs.
  4. Develop data tools for business departments that assist them in building and optimizing our product into an innovative industry leader.
  5. Maintain and optimize existing data models and architecture.

What makes you stand out:

  1. Strong experience with relational SQL and NoSQL databases, including Postgres/Redshift, MySQL/MariaDB/Aurora, and MongoDB.
  2. Experience with Python.
  3. Experience with AWS cloud services: S3, Athena, Redshift, Glue.
  4. Experience building, optimizing, and supporting big-data pipelines, architectures, and data sets.
  5. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  6. Experience building processes that support data transformation, data structures, metadata, dependency tracking, and workload management.
  7. Ability to predict and validate performance at 10x, 100x, and 1000x current data volumes, and to identify and manage bottlenecks in advance.
  8. Experience supporting and working with cross-functional teams in a dynamic environment.

Skills
  • AWS
  • Database Management
  • Development
  • Python
  • SQL
  • Team Collaboration
© 2025 cryptojobs.com. All rights reserved.