Job Description

Summary

As a Software Engineer II, you will be one of the core engineers on Ripple’s central Data Engineering team. This team implements the data ingestion and transformation that power analytics, machine learning, and various business functions at Ripple. You are curious about the bottlenecks and failure modes of a system and look for opportunities to continually improve its cost and performance characteristics. You are hands-on in driving key technical decisions, ensuring the right trade-offs are made to deliver high-quality results and measurable customer value. You work well across functions and teams, including data science, product, application engineering, compliance, and finance. Your passion for good engineering is complemented by strong instincts for delivering value.

WHAT YOU’LL DO:

  1. Ship solutions efficiently for both large and small projects.
  2. Handle ambiguity in requirements; define and propose solutions to resolve it.
  3. Write, present, and build agreement on design documents that highlight the architecture, timelines, and alternatives considered.
  4. Own the development and rollout of small to mid-sized projects.
  5. Write clean tech specs and identify risks before starting major projects.
  6. Recognize trade-offs and identify the impact and risks of alternative solutions.
  7. Improve code structure and architecture in data pipelines for testability and maintainability.
  8. Play an active role in breaking down initiatives that span multiple sprints and tasks.
  9. Lead feature development with 1-2 collaborators.

WHAT YOU'LL BRING: 

  1. Proficient (3-6 years) in at least one primary programming language (e.g., Python, Scala) and comfortable working with SQL
  2. Experienced with at least one data warehouse or data lake platform, such as Databricks
  3. Able to write sophisticated code and comfortable picking up new technologies independently
  4. Enjoy helping teams push the boundaries of analytical insights, create new product features using data, and power machine-learning models
  5. Familiar with developing distributed systems, with experience building scalable data pipelines
  6. Familiar with data technologies such as Spark or Flink and comfortable engineering data pipelines on financial datasets with them
  7. Experienced with RESTful APIs and server-side API integration
  8. Familiar, hands-on or conceptually, with AWS cloud resources (S3, Lambda, API Gateway, Kinesis, Athena, etc.)
  9. Experienced in orchestrating CI/CD pipelines using GitLab, Helm, and Terraform
  10. Appreciate the importance of excellent documentation and strong data-debugging skills
  11. Excel at taking vague requirements and crystallizing them into scalable data solutions
  12. Excited about operating independently, striving for excellence, and learning new technologies and frameworks

Skills
  • Data Structures
  • Development
  • Python
  • Software Engineering
  • SQL