Junior AI Engineer – Master’s Graduate Role
Location: London (Hybrid)
Salary: £40,000 – £45,000 + Bonus + Benefits
Start Date: ASAP
About the Opportunity:
We are seeking an ambitious and intellectually curious Junior AI Engineer to join a dynamic, fast-growing team operating at the forefront of artificial intelligence, large language models (LLMs), and cloud-native applications.
This is an excellent opportunity for a recent Master’s graduate eager to apply their academic and project experience to designing and deploying LLM-powered solutions and retrieval-augmented generation (RAG) systems within a modern cloud environment.
You’ll collaborate with experienced AI engineers, data scientists, and cloud specialists to transform ideas into production-ready applications that drive automation, security, and knowledge retrieval across multiple industries.
What You’ll Be Doing:
- Designing, developing, and deploying LLM applications (e.g. GPT, LLaMA, Claude) integrated with RAG pipelines
- Implementing end-to-end workflows, from embedding generation and vector search to model fine-tuning, deployment, and monitoring
- Building scalable retrieval pipelines using vector databases (e.g. Pinecone, FAISS, Weaviate, Milvus)
- Collaborating with engineers and cloud architects to deliver AI services on AWS, Azure, or GCP
- Applying NLP, generative AI, and retrieval methods to solve real-world challenges in automation and decision support
- Contributing to internal R&D and staying current with advances in LLMs, RAG frameworks, and generative AI
What We’re Looking For:
- A recently completed Master’s degree in Artificial Intelligence, Computer Science, Data Science, Engineering, Mathematics, or a related discipline from a Russell Group university
- Demonstrable project experience (academic research, dissertation work, or personal projects) involving LLMs and/or RAG pipelines
- Strong programming skills in Python (e.g. Hugging Face, LangChain, PyTorch, TensorFlow, scikit-learn)
- Familiarity with cloud technologies (AWS, Azure, or GCP) and distributed systems concepts
- Strong problem-solving mindset with a passion for applying AI in practical, scalable ways
- Excellent teamwork and communication skills, with the ability to explain complex concepts clearly
- Full right to work in the UK (we are unable to offer visa sponsorship for this role)
Desirable (Not Essential):
- Experience with vector databases (Pinecone, Weaviate, FAISS, Milvus)
- Exposure to MLOps practices, containerisation (Docker), or orchestration (Kubernetes)
- Knowledge of prompt engineering, fine-tuning, or evaluation of LLM performance
- Contributions to open-source AI/ML projects or published academic research
Benefits:
💰 Competitive Salary: £40,000 – £45,000 plus bonus
🏡 Hybrid Working: Flexible balance of office and remote work
📈 Career Growth: Mentorship, structured training, and clear progression paths
🛠 Modern Tech Stack: Hands-on with the latest LLM, RAG, and cloud-native tools
🤝 Collaborative Culture: Join a supportive, innovative engineering team
✨ Additional Perks: Pension scheme, private healthcare, and wellbeing initiatives
How to Apply:
If you’re excited by the opportunity to apply your academic and project work to real-world AI challenges, please send us your CV. A member of our team will be in touch promptly to discuss next steps.