About StackOne
StackOne is the AI Integration Gateway powering the next generation of SaaS and AI Agents. Backed by GV, Workday Ventures, and angels/advisors from DeepMind, OpenAI, GitHub & Mistral, we've raised $24M to enable developers to orchestrate thousands of secure, scalable, and accurate actions in AI Agents.
With an AI-native integration toolkit that delivers real-time execution, managed authentication, granular permissions, and full observability, all built with safety at its core, we're now doubling down on AI R&D and creating our own lab to push the boundaries of tool calling for agents, training specialized LLMs designed to outperform general-purpose models in what matters most: precision, reliability, and safety in agentic execution.
About The Role
You'll help build a world where users of any agents can integrate with the tool of their choice in one click thanks to StackOne.
We are looking for an AI Research Engineer with deep expertise in large-scale model fine-tuning, dataset curation, and training infrastructure. Unlike our AI Engineer role, which focuses on applying and productionizing existing LLMs and agent frameworks, this role is focused on pushing model performance through fine-tuning, synthetic data pipelines, and large-scale experimentation.
You will design, run, and own experiments on cutting-edge architectures, manage distributed training clusters, and help curate and generate high-quality datasets. This role sits closer to the research/ML infra side than product engineering, but with a strong mandate for applied, production-ready results.
In this role, you will work with StackOne's wider AI team (comprising other researchers and engineers) and report directly to the CTO.
Responsibilities
- Own the full lifecycle of model fine-tuning projects (objectives, dataset prep, training, eval, deployment handoff).
- Design and manage synthetic data generation workflows to augment real-world datasets.
- Build and maintain large-scale training infrastructure (multi-GPU/TPU clusters, orchestration, optimization).
- Develop tools for dataset curation, labeling, filtering, and augmentation.
- Conduct benchmarking and evaluations to measure fine-tuning impact.
- Collaborate with the rest of the engineering team to integrate fine-tuned models into production stacks.
- Stay ahead of research in parameter-efficient fine-tuning, synthetic data, and LLM training.
What We're Looking For
- Background in deep learning, with emphasis on LLMs.
- Experience running large-scale distributed training jobs.
- Understanding of synthetic data techniques and dataset pipeline design.
- Proficiency in evaluating LLMs with quantitative metrics and human evals.
- Desire to work in a fast-paced startup, taking end-to-end ownership of projects with a bias towards shipping.
- (Preferred) Contributions to open-source ML libraries or published research in applied ML/LLM fine-tuning.
Benefits
- 25 days holiday + 1 additional day holiday per year of tenure
- Participation in the company’s employee share options plan
- Private health insurance (including dental & optical)
- Health, fitness and gift card discounts
- £1,000 for your home office setup + £500/year top-up
- Paid lunch in the office
- Annual team offsite to sunny spots (last ones were in Spain and Portugal ☀️)
- Join one of Europe’s fastest-growing startups
- Work with a veteran team of alumni from Google, Microsoft, Oracle, Coinbase, JP Morgan and more
- Cycle2Work and Electric Cars scheme
- Hybrid work setup - typically 2 days in the office
Ready to help us change the game for SaaS integrations? Get in touch and let's chat!
We believe diversity drives innovation. We encourage individuals from all backgrounds to apply. As an equal-opportunity employer, we celebrate diversity and are committed to creating an inclusive environment for all employees.