Job Description

Summary

This role sits within GSR’s global Data Engineering team, where you’ll contribute to the design and development of scalable data systems that support our trading and business operations. You’ll work closely with stakeholders across the firm to build and maintain pipelines, manage data infrastructure, and ensure data is reliable, accessible, and secure.

It’s a hands-on engineering position with scope to shape the way data is handled across the business, working with modern tools in a fast-moving, high-performance environment.

Your responsibilities may include:

Data Pipeline Development

  1. Build and maintain scalable, efficient ETL/ELT pipelines for both real-time and batch processing.
  2. Integrate data from APIs, streaming platforms, and legacy systems, with a focus on data quality and reliability.

Infrastructure & Architecture

  1. Design and manage data storage solutions, including databases, warehouses, and lakes.
  2. Leverage cloud-native services and distributed processing tools (e.g., Apache Flink, AWS Batch) to support large-scale data workloads.

Operations & Tooling

  1. Monitor, troubleshoot, and optimize data pipelines to ensure performance and cost efficiency.
  2. Implement data governance, access controls, and security measures in line with best practices and regulatory standards.
  3. Develop observability and anomaly detection tools to support Tier 1 systems.

Collaboration & Continuous Improvement

  1. Work with engineers and business teams to gather requirements and translate them into technical solutions.
  2. Maintain documentation, follow coding standards, and contribute to CI/CD processes.
  3. Stay current with new technologies and help improve the team’s tooling and infrastructure.

What We’re Looking For

  1. 8+ years of experience in data engineering or a related field.
  2. Strong programming skills in Java, Python and SQL; familiarity with Rust is a plus.
  3. Proven experience designing and maintaining scalable ETL/ELT pipelines and data architectures.
  4. Hands-on expertise with cloud platforms (e.g., AWS) and cloud-native data services.
  5. Comfortable with big data tools and distributed processing frameworks such as Apache Flink or AWS Batch.
  6. Strong understanding of data governance, security, and best practices for data quality.
  7. Effective communicator with the ability to work across technical and non-technical teams.

Additional Strengths

  1. Experience with orchestration tools like Apache Airflow.
  2. Knowledge of real-time data processing and event-driven architectures.
  3. Familiarity with observability tools and anomaly detection for production systems.
  4. Exposure to data visualization platforms such as Tableau or Looker.
  5. Relevant cloud or data engineering certifications.

What We Offer

  1. A collaborative and transparent company culture founded on Integrity, Innovation and Performance. 
  2. Competitive salary with two discretionary bonus payments a year.
  3. Benefits such as Healthcare, Dental, Vision, Retirement Planning, 30 days' holiday, and free lunches when in the office.
  4. Regular Town Halls, team lunches and drinks. 
  5. A Corporate Social Responsibility program, as well as charity fundraising matching and volunteer days.

Skills
  • AWS
  • Communication Skills
  • Data Analysis
  • Java
  • Python
  • Rust
  • SQL