About Olumi
Olumi is a science-powered decision enhancement platform that supercharges how teams think, collaborate, and win. It connects strategy to decisions to outcomes, driving sharper thinking, breakthrough innovation, and strategic alignment.
The Role
Help develop our computational framework and prototype for a neurosymbolic-hybrid probabilistic inference engine that provides decision support for user-elicited scenarios and integrates diverse expert inputs.
Contract: 6–8 weeks, 3–4 days/week, remote-first; candidates worldwide are considered. We run two late-UK review windows each week, so please ensure you can join from 17:00 to 19:00 UK time twice a week.
Upon successful completion of the contract, this role is intended to transition to a permanent position with a market-aligned salary and an equity grant. Equity will be awarded via EMI, or an equivalent plan elsewhere, with standard 4-year vesting and a 1-year cliff. Full details confirmed at conversion.
What you’ll do
- Define and validate an LLM-to-JSON schema to turn elicited‑English “context and constraints” into structured inputs.
- Implement standard Bayesian inference schemes conditional on the imputed model: e.g. Beta–Binomial and Dirichlet–Multinomial conjugate updates, plus simple Monte Carlo and Markov chain Monte Carlo inference.
- Compute posterior expectations and credible intervals, P(best), expected regret, feasibility, and sensitivity metrics.
- Expose a minimal POST /inference API endpoint with strict validation and basic caching (Node/TypeScript preferred; a Python microservice is also acceptable). The API should return structured JSON suitable for our existing UI.
- Return a validated JSON run log (including parameters, constraints, and seed) for full reproducibility.
- Write unit tests (property-based and reproducibility tests), code comments, and brief docs.
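For a sense of the kind of code involved, here is a minimal sketch of a Beta–Binomial update with Monte Carlo P(best), credible intervals, and a reproducible run log. Function and field names are ours for illustration, not the actual engine:

```python
import random


def p_best_and_intervals(successes, trials, prior=(1.0, 1.0),
                         n_samples=20000, seed=42):
    """Beta-Binomial conjugate update plus Monte Carlo P(best) per option.

    successes/trials: per-option observed data; prior: shared Beta(a, b) prior.
    Returns posterior means, 95% credible intervals, P(best), and a run log
    (seed plus all parameters) so the computation is fully reproducible.
    """
    rng = random.Random(seed)  # seeded RNG -> deterministic reruns
    a0, b0 = prior
    # Conjugate update: Beta(a0 + k, b0 + n - k) for each option.
    posts = [(a0 + k, b0 + n - k) for k, n in zip(successes, trials)]
    wins = [0] * len(posts)
    draws = [[] for _ in posts]
    for _ in range(n_samples):
        sample = [rng.betavariate(a, b) for a, b in posts]
        wins[sample.index(max(sample))] += 1  # count which option is best
        for d, s in zip(draws, sample):
            d.append(s)

    def cred95(xs):
        xs = sorted(xs)
        return (xs[int(0.025 * len(xs))], xs[int(0.975 * len(xs))])

    return {
        "posterior_mean": [a / (a + b) for a, b in posts],
        "credible_95": [cred95(d) for d in draws],
        "p_best": [w / n_samples for w in wins],
        "run_log": {"seed": seed, "prior": prior, "n_samples": n_samples,
                    "successes": successes, "trials": trials},
    }


result = p_best_and_intervals([12, 7, 3], [20, 20, 20])
```

Rerunning with the same run log reproduces the metrics exactly, which is the point of seeding and logging every input.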
You’ll thrive if you have
- Bayes basics (familiarity with exponential family distributions, conjugate priors, Monte Carlo sampling).
- Strong TypeScript or Python; comfortable with numeric code and tests.
- Prompting for schema‑constrained extraction with strict shape validation.
- Pragmatic hygiene: input validation, log-space numerics where helpful, reproducible RNG, profiling for hotspots.
- Language: Fluent English (C1+) required, as strong written clarity is essential.
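On the log-space point above: the standard trick is a logsumexp that shifts by the maximum before exponentiating, so likelihood products far below float underflow stay usable. A minimal sketch:

```python
import math


def logsumexp(log_vals):
    """Compute log(sum(exp(x) for x in log_vals)) stably.

    Shifting by the max keeps every exp() argument <= 0, so nothing
    overflows and at least one term is exactly 1.0.
    """
    m = max(log_vals)
    if m == -math.inf:  # all terms are zero probability
        return -math.inf
    return m + math.log(sum(math.exp(x - m) for x in log_vals))


# Normalising tiny likelihoods: direct exp() would underflow to 0.0 here.
log_like = [-1100.0, -1101.0, -1105.0]
log_norm = logsumexp(log_like)
weights = [math.exp(x - log_norm) for x in log_like]
```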
Nice to have
- Running heavy work off the main thread (Node worker threads or Python multiprocessing).
- Building small web APIs (Express for JS or FastAPI for Python) and returning tidy data for charts.
- LLM prompting basics:
  - Show 2–3 examples first (few-shot) to get stable outputs.
  - Ask the model to think step-by-step privately (scratchpad/chain of thought) but return only structured results.
  - Make the model return schema-validated JSON (or follow a simple grammar).
  - Use function/tool calling when available.
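To illustrate the "schema-validated JSON" point: a strict shape check parses the model's output and rejects anything malformed rather than silently coercing it. The field names below are hypothetical; the real schema would be agreed during the project:

```python
import json

# Hypothetical target shape for elicited "context and constraints".
SCHEMA = {
    "options": list,      # list of option-name strings
    "constraints": list,  # list of constraint strings
    "prior": dict,        # e.g. {"alpha": 1.0, "beta": 1.0}
}


def validate_run_input(raw: str) -> dict:
    """Parse LLM output and fail loudly on any shape mismatch."""
    data = json.loads(raw)
    if not isinstance(data, dict):
        raise ValueError("top level must be an object")
    unknown = set(data) - set(SCHEMA)
    if unknown:
        raise ValueError(f"unexpected keys: {sorted(unknown)}")
    for key, typ in SCHEMA.items():
        if key not in data:
            raise ValueError(f"missing key: {key}")
        if not isinstance(data[key], typ):
            raise ValueError(f"{key} must be {typ.__name__}")
    if not all(isinstance(o, str) for o in data["options"]):
        raise ValueError("options must be strings")
    return data


ok = validate_run_input(
    '{"options": ["A", "B"], "constraints": [], '
    '"prior": {"alpha": 1, "beta": 1}}'
)
```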
What you’ll get
Research oversight from a leading cognitive science/AI advisor from a top university, a sharp real-world project, and a lead role developing a self-contained, showcase PoC.
Contract and pay
Fixed‑term with milestone‑based payments (competitive graduate day rate). Immediate start preferred. Recent graduates are welcome.
How to apply
Please include: CV, GitHub/portfolio link, availability, and 3–4 lines on a small probabilistic project you’ve built, with a link to the code.
Selection process
- Short take-home (≤60 mins): implement a Beta–Binomial update and P(best) on a 3-option toy problem; then modify one input and report how the key metrics change; include two property tests. Also describe how Markov chain Monte Carlo methods could be used to sample from this and other posterior distributions.
- 30‑min review: Walk through choices, tests, and trade‑offs.
Olumi is an equal‑opportunity team. We welcome applicants from all backgrounds.
Sorry, no recruiters.