Day Rate Contract - Option To Convert To Permanent In The Future
Join one of the UK's largest banks building next‑generation AI capabilities with a strong commitment to safe, explainable, and trusted AI. This team is developing cutting‑edge guardrail technologies to ensure AI systems behave reliably across text, voice, and other emerging modalities.
This role is ideal for a curious, rigorous thinker (such as a recent Master’s or PhD graduate) with a passion for responsible AI, agentic systems, and the scientific foundations behind guardrail effectiveness. You will work at the intersection of research, model development, and deep validation, contributing to safety frameworks that shape the organisation’s AI strategy.
What You’ll Do
Research & Explore
- Conduct advanced research into AI guardrails, agentic behaviours, and safe model‑interaction patterns.
- Explore state‑of‑the‑art methods across LLMs, multimodal models, and emerging agent systems.
- Investigate niche areas of AI safety such as unintended behaviours, boundary testing, and robustness.
Build & Experiment
- Develop prototype models, safety mechanisms, and evaluation tools.
- Experiment with multimodal inputs including text, voice, and video.
- Build and refine guardrail mechanisms that operate across these modalities.
Deep Testing & Validation
- Design and run in‑depth validation experiments to confirm guardrail effectiveness.
- Stress‑test models against security threats, misuse, red‑teaming scenarios, and failure boundaries.
- Support development of automated testing frameworks for AI controls.
Contribute to Responsible AI Strategy
- Help validate the controls that ensure AI systems meet internal responsible AI standards.
- Collaborate with engineers, safety specialists, and governance teams.
- Produce high‑quality research insights to guide product and platform direction.
What We’re Looking For
- Strong research credentials (PhD, MPhil, MSc, or equivalent research experience).
- Familiarity with Python‑based research frameworks.
- Strong foundational knowledge in machine learning, foundation models, or multimodal AI.
- Enthusiasm for AI safety, guardrails, and responsible‑AI frameworks.
- Experience building or fine‑tuning models (open‑source or proprietary).
- Ability to design experiments, measure model behaviour, and interpret results.
- Curiosity about AI alignment, agentic behaviour, and interpretability.
- Exposure to LLM or multimodal model evaluation.
Nice to have:
- Experience working with synthetic data, evaluation sets, or adversarial testing.
- Interest in governance, risk, or AI assurance.
Why Join?
This is a rare opportunity to work on advanced AI research within a major organisation deploying AI at enterprise scale. You’ll join a growing research capability, exploring cutting‑edge topics while ensuring AI is developed ethically, responsibly, and with world‑class guardrails.
You’ll benefit from:
- Access to advanced tools and emerging models.
- Opportunities to publish internal research and influence strategic direction.
- Mentorship from experienced AI and safety specialists.
- A collaborative environment that values experimentation and novel thinking.