Job Description

Summary

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks to build scalable data solutions that power next-gen AI and machine learning. Join our fast-growing team to work on impactful projects, collaborate with top talent, and drive innovation at scale.

Key Responsibilities:

  1. Design, build, and manage large-scale data infrastructure using AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS.
  2. Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies.
  3. Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems.
  4. Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs.
  5. Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality.
  6. Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management.
  7. Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models.
  8. Translate complex functional and technical requirements into detailed design proposals and implement them.
  9. Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team.
  10. Identify, troubleshoot, and resolve complex data-related issues.
  11. Champion best practices in data management, ensuring the cleanliness, integrity, and accessibility of our data.
  12. Optimize and fine-tune data queries and processes for performance.
  13. Evaluate and advise on technological components, such as software, hardware, and networking capabilities, for database management systems and infrastructure.
  14. Stay informed on the latest industry trends and technologies to ensure our data infrastructure is modern and robust.

Qualifications:

  1. 5-7 years of hands-on experience with AWS data engineering technologies such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, and Amazon RDS, as well as workflow orchestration tools such as Apache Airflow.
  2. Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog.
  3. Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows.
  4. Experience with Python and/or Java.
  5. Deep understanding of data structures, data modeling, and software architecture.
  6. Experience with AI and machine learning technologies is highly desirable.
  7. Strong problem-solving skills and attention to detail.
  8. Self-motivated and able to work independently, with excellent organizational and multitasking skills.
  9. Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders.
  10. Bachelor's degree in Computer Science, Information Systems, or a related field; a Master's degree is preferred.

Skills
  • Attention to Detail
  • AWS
  • Communication Skills
  • Database Management
  • Java
  • Problem Solving
  • SQL
  • Team Collaboration