About Luupli
Luupli is a social media app that has equity, diversity, and equality at its heart. We believe that social media can be a force for good, and we are committed to creating a platform that maximizes the value creators and businesses can gain from it, while making a positive impact on society and the planet. Our app is currently in beta testing, and we are excited about the possibilities it presents. Our team is made up of passionate, dedicated individuals who are committed to making Luupli a success.
Job Description
As an AI Engineer at Luupli, you will play a pivotal role in developing intelligent systems and orchestrating agentic workflows that power Luupli’s AI features. Your work will span Retrieval-Augmented Generation (RAG), multi-agent LLM orchestration, auto-captioning, generative media, and content moderation.
You’ll use frameworks like LangGraph, LangChain, and Google’s Agent Development Kit to build persistent, scalable AI services on Google Cloud Platform (GCP). This is a full-stack AI role that spans intelligent backend APIs, LLM agent orchestration, and integration with product-facing features.
Responsibilities
- Build and deploy multi-agent AI workflows using LangGraph, LangChain, or Google’s Agent Development Kit.
- Implement RAG pipelines using embeddings, semantic chunking, and vector databases (e.g., FAISS, Pinecone, Weaviate).
- Integrate hosted and open-source LLMs (OpenAI, Gemini, Claude, Ollama, Mistral) into intelligent systems.
- Build REST APIs with FastAPI and internal tools with Streamlit to expose AI functionality.
- Deploy production-grade services on GCP using Vertex AI, Cloud Run, Cloud Functions, IAM, and Pub/Sub.
- Embed AI into platform features such as auto-captioning, LuupForge (generative studio), feed personalization, and real-time moderation.
- Maintain modular, testable, observable, and secure code across the AI system lifecycle.
Requirements
- 3+ years of experience in applied AI/ML engineering (production-level deployments, not research-only).
- Strong Python development skills with full-stack AI engineering experience:
  - FastAPI, Streamlit
  - LangGraph, LangChain, or similar
  - PyTorch, Transformers
  - FAISS, Weaviate, or Pinecone
- Solid experience working with hosted APIs (OpenAI, Gemini) and self-hosted models (Mistral, Ollama, LLaMA).
- Deep understanding of LLM orchestration, agent tool-use, memory sharing, and prompt engineering.
- Hands-on experience with Google Cloud Platform (GCP), especially Vertex AI, Cloud Functions, Cloud Run, and Pub/Sub.
- Familiarity with best practices in cloud-based software development: containerization, CI/CD, testing, monitoring.
Nice to Have
- Experience with Google’s Agent Development Kit or similar agent ecosystems.
- Familiarity with multimodal AI (e.g., handling text, image, audio, or video content).
- Prior experience developing creator platforms, content recommendation engines, or social media analytics.
- Understanding of ethical AI principles, data privacy, and bias mitigation.
- Experience with observability tools (e.g., Sentry, OpenTelemetry, Datadog).
- Data engineering experience, such as:
  - Building ETL/ELT pipelines
  - Working with event-based ingestion and structured logs (e.g., user sessions, reactions, feeds)
  - Using tools like BigQuery, Airflow, or dbt
  - Designing or consuming feature stores for AI/ML applications
Compensation
This is an equity-only position, offering a unique opportunity to gain a stake in a rapidly growing company and contribute directly to its success.
To Apply
Please send your resume and cover letter to recruitment@luupli.com with the subject line: “AI Engineer”
As part of your cover letter, please respond to the following questions:
- This position is structured on an equity-only basis and is unpaid until we secure seed funding. Given this structure, are you comfortable continuing with your application for this role?
- Have you built or contributed to agent-based AI systems using frameworks like LangGraph, LangChain, or Google’s Agent Development Kit?
- Do you have experience with Retrieval-Augmented Generation (RAG) systems and vector databases (e.g., FAISS, Pinecone, Weaviate)?
- Have you deployed AI systems on Google Cloud Platform? If not, which cloud platforms have you used and how?
- Have you integrated LLMs (e.g., OpenAI, Gemini, Claude) into autonomous or multi-step workflows?
- Can you explain how agents collaborate and maintain memory across tasks in multi-agent systems?
- What is your experience with prompt engineering, tool invocation, and orchestrated LLM workflows?
- Do you have any public code repositories (e.g., GitHub), demo URLs, or project write-ups showcasing your work? Please include links.