Job Description

Summary

As a member of the AI model team, you will drive innovation in supervised fine-tuning methodologies for advanced models. Your work will refine pre-trained models so that they deliver enhanced intelligence, optimized performance, and domain-specific capabilities tailored to real-world challenges. You will work on a wide spectrum of systems, ranging from streamlined, resource-efficient models that run on limited hardware to complex multi-modal architectures that integrate modalities such as text, images, and audio.

We expect you to have deep expertise in large language model architectures and substantial experience in fine-tuning optimization. You will adopt a hands-on, research-driven approach to developing, testing, and implementing new fine-tuning techniques and algorithms. Your responsibilities include curating specialized data, strengthening baseline performance, and identifying and resolving bottlenecks in the fine-tuning process. The goal is to unlock superior domain-adapted AI performance and push the limits of what these models can achieve.

Responsibilities:

  1. Develop and implement novel, state-of-the-art fine-tuning methodologies for pre-trained models, with clear performance targets.
  2. Build, run, and monitor controlled fine-tuning experiments while tracking key performance indicators. Document iterative results and compare them against benchmark datasets (a minimal experiment sketch follows this list).
  3. Identify and process high-quality datasets tailored to specific domains. Set measurable criteria to ensure that data curation positively impacts model performance in fine-tuning tasks.
  4. Systematically debug and optimize the fine-tuning process by analyzing computational and model performance metrics.
  5. Collaborate with cross-functional teams to deploy fine-tuned models into production pipelines. Define clear success metrics and ensure continuous monitoring for improvements and domain adaptation.
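
To make responsibility 2 concrete, the following is a minimal sketch of such a controlled fine-tuning experiment, built on PyTorch via the Hugging Face Transformers Trainer. The base model, data files, and hyperparameters are illustrative assumptions, not part of this posting.

  # Minimal supervised fine-tuning sketch with Hugging Face Transformers.
  # "gpt2", train.txt/eval.txt, and all hyperparameters are placeholders.
  from datasets import load_dataset
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling, Trainer,
                            TrainingArguments)

  model_name = "gpt2"                        # placeholder base model
  model = AutoModelForCausalLM.from_pretrained(model_name)
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token

  # Placeholder domain corpus; in practice this is the curated,
  # domain-specific dataset described in responsibility 3.
  dataset = load_dataset("text", data_files={"train": "train.txt",
                                             "eval": "eval.txt"})

  def tokenize(batch):
      return tokenizer(batch["text"], truncation=True, max_length=512)

  tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

  args = TrainingArguments(
      output_dir="sft-run",
      per_device_train_batch_size=4,
      num_train_epochs=1,
      learning_rate=2e-5,
      logging_steps=50,                      # track training loss as a KPI
  )

  trainer = Trainer(
      model=model,
      args=args,
      train_dataset=tokenized["train"],
      eval_dataset=tokenized["eval"],
      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
  )

  trainer.train()
  print(trainer.evaluate())                  # held-out loss for comparison

In practice, each run would be logged against a fixed benchmark suite so successive iterations stay directly comparable.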

Job requirements

  1. A degree in Computer Science or a related field, ideally a PhD in NLP, Machine Learning, or a related area, complemented by a solid track record in AI R&D (including publications at top-tier, A*-ranked conferences).
  2. Hands-on experience with large-scale fine-tuning experiments, where your contributions have led to measurable improvements in domain-specific model performance.
  3. Deep understanding of advanced fine-tuning methodologies, including state-of-the-art modifications for transformer architectures as well as alternative approaches. Your expertise should emphasize techniques that enhance model intelligence, efficiency, and scalability within fine-tuning workflows.
  4. Strong expertise in PyTorch and the Hugging Face libraries, with practical experience developing fine-tuning pipelines, continuously adapting models to new data, and deploying the refined models to production on target platforms (see the pipeline sketch after this list).
  5. Demonstrated ability to apply empirical research to overcome fine-tuning bottlenecks. You should be comfortable designing evaluation frameworks and iterating on algorithmic improvements to continuously push the boundaries of fine-tuned AI performance.
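
As a hedged illustration of the pipeline skills named in requirement 4, the sketch below wraps a base model with LoRA adapters via the PEFT library so that only a small fraction of parameters is updated during fine-tuning. The model name, target modules, and adapter hyperparameters are assumptions chosen for the example.

  # Illustrative parameter-efficient fine-tuning setup with PyTorch,
  # Transformers, and PEFT (LoRA). Names and hyperparameters are assumptions.
  from peft import LoraConfig, get_peft_model
  from transformers import AutoModelForCausalLM, AutoTokenizer

  base = "gpt2"                              # placeholder base model
  model = AutoModelForCausalLM.from_pretrained(base)
  tokenizer = AutoTokenizer.from_pretrained(base)

  # Attach low-rank adapters to the attention projection; only the adapter
  # weights are trained, keeping fine-tuning cheap and deployment light.
  lora_cfg = LoraConfig(
      r=8,
      lora_alpha=16,
      lora_dropout=0.05,
      target_modules=["c_attn"],             # GPT-2 attention projection
      task_type="CAUSAL_LM",
  )
  model = get_peft_model(model, lora_cfg)
  model.print_trainable_parameters()         # sanity-check adapter size

  # The wrapped model drops into a standard Trainer or PyTorch training
  # loop, such as the experiment sketch under "Responsibilities" above.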

Skills
  • Development
  • Machine Learning
  • Software Engineering
  • Team Collaboration