Job Description
Summary
Your role:
- Discover, analyze, and assemble large, complex data sets that meet functional and non-functional business requirements.
- Build ETL pipelines for a wide variety of data sources using (mostly) SQL and Python in an AWS environment.
- Work with stakeholders across the Executive, Product, and Engineering teams to resolve data-related technical issues and support their data needs.
- Develop data tools that help business departments build and optimize our product into an innovative industry leader.
- Maintain and optimize existing data models and architecture.
What makes you stand out:
- Strong experience with relational SQL and NoSQL databases, including Postgres/Redshift, MySQL/MariaDB/Aurora, and MongoDB.
- Experience with Python.
- Experience with AWS cloud services: S3, Athena, Redshift, and Glue.
- Experience building, optimizing, and supporting 'big data' pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
- Ability to predict and validate performance at 10x, 100x, or 1000x current data volumes and address bottlenecks before they occur.
- Experience supporting and collaborating with cross-functional teams in a dynamic environment.
Skills
- AWS
- Database Management
- Development
- Python
- SQL
- Team Collaboration