About Us
Tavus is a research lab pioneering human computing. We’re building AI Humans: a new interface that closes the gap between people and machines, free from the friction of today’s systems. Our real-time human simulation models let machines see, hear, respond, and even look real—enabling meaningful, face-to-face conversations. AI Humans combine the emotional intelligence of humans with the reach and reliability of machines, making them capable, trusted agents available 24/7, in every language, on our terms.
Imagine a therapist anyone can afford. A personal trainer that adapts to your schedule. A fleet of medical assistants that can give every patient the attention they need. With Tavus, individuals, enterprises, and developers can all build AI Humans to connect, understand, and act with empathy at scale.
We’re a Series A company backed by world-class investors including Sequoia Capital, Y Combinator, and Scale Venture Partners.
Be part of shaping a future where humans and machines truly understand each other.
The Role
We’re looking for an AI Researcher to join our core AI team and push the boundaries of large language modeling in the context of conversational AI. If you thrive in fast-moving startup environments, enjoy experimenting with new ideas, and love seeing your work come to life in production, then you’ll feel right at home.
Your Mission 🚀
- Conduct research on large language modeling and adaptation for conversational avatars (e.g., neural avatars, talking heads).
- Develop methods to model both verbal and non-verbal aspects of conversation, adapting and controlling avatar behavior in real time.
- Experiment with fine-tuning, adaptation, and conditioning techniques to make LLMs more expressive, controllable, and task-specific.
- Partner with the Applied ML team to take research from prototype to production.
- Stay up to date with cutting-edge advancements — and help define what comes next.
You’ll Be Great At This If You Have:
- A PhD (or near completion) in a relevant field, or equivalent research experience.
- Hands-on experience with LLMs or VLMs and a strong foundation in generative language models.
- Experience in fine-tuning/adapting LLMs for control, conditioning, or downstream tasks.
- Solid background in deep learning and familiarity with foundation model methods.
- Strong PyTorch skills and comfort building deep learning pipelines.
Nice-to-Haves
- Knowledge of large-scale model training and optimization.
- Broader understanding of generative AI across modalities.
- Exposure to software development best practices.
- A flexible, experimental mindset: comfortable working across research and engineering.
- (Bonus) Publications at EMNLP, COLING, NeurIPS, ICLR, CVPR, ICCV.
Location
Preferred: San Francisco (hybrid) or London (office opening soon).
Remote within the U.S. or Europe available for exceptional candidates.