Maitai is reshaping how enterprise companies build with LLMs by creating the management layer for enterprise AI stacks. We're seeking a Founding Engineer to help tackle the hardest challenges of running LLMs in production while working at the cutting edge of open-source models and accelerated compute.
You'll be working on high-performance distributed systems that power our LLM proxy and fine-tuning infrastructure. Our backend is built with Python (Quart) and Go, running on Kubernetes across AWS and GCP, with Terraform handling infrastructure as code. You'll optimize real-time LLM autocorrections, request routing, and backend latency, while also improving our fine-tuning pipeline to balance speed, cost, and accuracy.
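To give a concrete flavor of the proxy work, here is a minimal, illustrative sketch of an async request-forwarding route in Quart. The endpoint path, upstream URL, and hook points are hypothetical, assumed for the example rather than taken from Maitai's actual API:

```python
# Illustrative only: a minimal async proxy route in Quart, forwarding a chat
# completion request to an upstream model server. The route, UPSTREAM_URL,
# and hook points are hypothetical, not Maitai's actual API.
import httpx
from quart import Quart, request, jsonify

app = Quart(__name__)
UPSTREAM_URL = "https://example-model-server.internal/v1/chat/completions"  # hypothetical

@app.post("/v1/chat/completions")
async def proxy_chat_completion():
    payload = await request.get_json()
    # In a real proxy, routing decisions, guardrail checks, and
    # latency-sensitive autocorrections would hook in around this call.
    async with httpx.AsyncClient(timeout=30.0) as client:
        upstream = await client.post(UPSTREAM_URL, json=payload)
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run()
```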
On the frontend, you'll contribute to our Portal (React/TypeScript), where customers configure guardrails, fine-tune models, and test their stack. The role offers significant impact and ownership: you'll work directly with the founders to shape the product and technical direction.
We're a fast-moving, well-capitalized startup with growing customer demand. Our technology enables businesses to create AI models that are 10x faster and more accurate than closed-source alternatives, while providing more reliable inference through online guardrails that catch and fix model failures in real time.
The ideal candidate is passionate about system optimization, has experience building and shipping end-to-end systems, and thrives in a fast-paced environment where engineers make product decisions. You'll be joining an elite team solving challenging technical problems at scale, with meaningful equity ownership and the opportunity to shape the future of enterprise AI infrastructure.