Job Description
Summary
Mission
Design, build, and own a working prototype that detects emerging threats to stablecoin peg stability, forecasts depeg risk, ingests and interprets compliance/regulatory context, fuses heterogeneous signals into a composite risk index (SPRI), and surfaces explainable, prioritised alerts to risk and compliance stakeholders. This is a hands-on role combining system architecture, data/model engineering, and operational decision-support delivery.
Key Responsibilities
- Define and own the end-to-end PoC architecture: signal ingestion, feature engineering, anomaly detection, depeg probability forecasting, NLP compliance/context pipeline, signal fusion, explainability, alerting, and briefing generation.
- Translate business risk objectives (peg integrity, regulatory drift, depegging probability) into concrete data contracts, JSON schema definitions, model interfaces, and alert logic.
- Build and operate real-time and batch pipelines ingesting heterogeneous data: market/peg dynamics, order book/depth, on-chain flows (holder entropy, net imbalances, graph structure), reserve/backing health, regulatory text, and external sentiment signals.
- Develop and tune AI models:
  - Unsupervised/hybrid anomaly detection for behavioural irregularities.
  - Supervised short-horizon depeg probability forecasting with calibration.
  - NLP components for topic classification, semantic drift/change detection in policy text, and sentiment/concern extraction.
- Fuse signals into a composite risk index (SPRI) and design prioritisation logic that amplifies correlated stress (e.g., behavioural anomaly + regulatory shock).
- Implement explainability (feature attribution, templated rationales) so alerts are transparent and actionable for compliance/risk analysts.
- Build or shepherd a lightweight triage interface (dashboard or API) exposing alerts, scores, explanations, and recommended next steps.
- Create and curate ground truth via historical and synthetic stress/depeg scenarios; drive evaluation metrics (detection lead time, calibration, alert lift).
- Embed a human-in-the-loop feedback loop: capture analyst annotations (accept/reject, priority adjustment) and iteratively refine models, fusion weights, and thresholds.
- Prepare and deliver stakeholder-ready incident walkthroughs and briefing summaries; own the final PoC report with quantitative findings and productionisation recommendations.
- Ensure secure, auditable data handling, and frame AI outputs as decision support subject to human oversight.
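To make the fusion and explainability responsibilities concrete, a minimal sketch of an SPRI-style composite with correlated-stress amplification and a templated rationale. All names, weights, and thresholds here are illustrative assumptions, not a prescribed design; in practice the fusion logic would be expert-seeded and then tuned against labelled incidents and analyst feedback.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    anomaly: float      # behavioural anomaly score in [0, 1]
    depeg_prob: float   # calibrated short-horizon depeg probability in [0, 1]
    reg_stress: float   # regulatory/compliance stress score in [0, 1]

# Hypothetical expert-driven starting weights and thresholds.
WEIGHTS = {"anomaly": 0.4, "depeg_prob": 0.4, "reg_stress": 0.2}
STRESS_THRESHOLD = 0.6
AMPLIFICATION = 1.5  # boost applied when independent signals co-occur

def spri(s: Signals) -> float:
    """Composite risk index with correlated-stress amplification."""
    base = (WEIGHTS["anomaly"] * s.anomaly
            + WEIGHTS["depeg_prob"] * s.depeg_prob
            + WEIGHTS["reg_stress"] * s.reg_stress)
    elevated = sum(v > STRESS_THRESHOLD
                   for v in (s.anomaly, s.depeg_prob, s.reg_stress))
    if elevated >= 2:  # amplify correlated stress, capped at 1.0
        base = min(1.0, base * AMPLIFICATION)
    return base

def rationale(s: Signals) -> str:
    """Templated explanation listing the signals driving the score."""
    parts = [f"{name} elevated ({value:.2f})"
             for name, value in [("behavioural anomaly", s.anomaly),
                                 ("depeg probability", s.depeg_prob),
                                 ("regulatory stress", s.reg_stress)]
             if value > STRESS_THRESHOLD]
    return "; ".join(parts) or "no individual signal above threshold"
```

The design choice worth noting: a plain weighted sum under-weights the scenario where a behavioural anomaly and a regulatory shock land together, so the amplification term exists precisely to surface that joint condition above uncorrelated noise.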
Required Qualifications
- 4+ years of hands-on experience building and deploying applied AI/ML systems, including both architectural design and implementation (anomaly detection, time-series forecasting, NLP).
- Strong Python expertise and familiarity with core libraries (scikit-learn, PyTorch/TensorFlow, Hugging Face transformers, SHAP or equivalent explainability tools).
- Proven ability to ingest and engineer features from heterogeneous sources (numerical time series, graph/flow data, unstructured text) and align them temporally for fusion.
- Experience designing and combining multiple risk signals into composite indices or scoring systems; comfortable with both expert-driven and learned fusion logic.
- Practical knowledge of NLP for document classification, semantic similarity/drift detection, and sentiment extraction.
- Track record of building human-in-the-loop feedback mechanisms to improve model quality iteratively.
- Strong product/operational instincts: able to turn model outputs into alerts, rationale, and concise briefings for non-technical stakeholders.
- Excellent communication and collaboration skills: able to work directly with compliance/risk SMEs and present to senior stakeholders.
- Must be based in the UK (due to regulatory engagement and coordination with UK-centric stakeholders).
Success Metrics (for the PoC)
- Reliable anomaly detection and depeg probability forecasting on synthetic/historical events with measurable lead-time advantage.
- Composite SPRI that meaningfully prioritises genuine risk incidents over noise.
- Alerts consistently enriched with contextual compliance/regulatory signals and accompanied by clear explainability.
- Analyst feedback loop functioning: measurable improvement in precision/recall or alert utility over iterations.
- Stakeholder-ready demo delivering end-to-end incident narratives and a concise recommendation report.
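Two of the metrics above, detection lead time and forecast calibration, can be sketched as small evaluation helpers. Function names and the timestamp units are illustrative assumptions; calibration is measured here with the Brier score, one common choice among several.

```python
def detection_lead_time(alert_times, incident_time):
    """Lead time between the earliest alert fired before an incident
    and the incident itself (same time units as the inputs).
    Returns None if no alert preceded the incident."""
    prior = [t for t in alert_times if t < incident_time]
    return incident_time - min(prior) if prior else None

def brier_score(forecast_probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1
    outcomes; lower means better calibrated, 0.0 is perfect."""
    assert len(forecast_probs) == len(outcomes)
    return sum((p - o) ** 2
               for p, o in zip(forecast_probs, outcomes)) / len(outcomes)
```

For example, alerts at t=95 and t=98 ahead of an incident at t=100 give a lead time of 5, and a forecaster that assigns probability 1.0 to every realised depeg and 0.0 otherwise scores a perfect 0.0.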
Engagement Details
- Duration: 6-month prototype engagement, with a strong possibility of transitioning to a production strategy or extended role.
- Reporting: Direct to PoC sponsor (CTO / Head of Risk); responsible for biweekly demos.
- Location requirement: UK-based
Application Materials Requested
- Brief case study or examples of similar systems (anomaly detection + compliance signal fusion, risk scoring, NLP drift detection).
- High-level sketch of how you would architect fusion of behavioural and regulatory signals into explainable alerts.
- References or prior work in regulated/financial contexts if available.
Skills
- Machine Learning
- Python