Transform language models into real-world, high-impact product experiences.

A1 is a self-funded AI group operating in full stealth. We’re building a new global consumer AI application focused on an important but underexplored use case. You will shape the core technical direction of A1: model selection, training strategy, infrastructure, and long-term architecture. This is a founding technical role, and your decisions will define our model stack, our data strategy, and our product capabilities for years to come. You won’t just fine-tune models; you’ll design systems: training pipelines, evaluation frameworks, inference stacks, and scalable deployment architectures. You will have full autonomy to experiment with frontier models (LLaMA, Mistral, Qwen, Claude-compatible architectures) and to build new approaches where existing ones fall short.

Why This Role Matters

- You are creating the intelligence layer of A1’s first product, defining how it understands, reasons, and interacts with users.
- Your decisions shape our entire technical foundation: model architectures, training pipelines, inference systems, and long-term scalability.
- You will push beyond typical chatbot use cases, working on a problem space that requires original thinking, experimentation, and contrarian insight.
- You influence not just how the product works but what it becomes, helping steer the direction of our earliest use cases.
- You are joining as a founding builder, setting engineering standards, contributing to culture, and helping create one of the most meaningful AI applications of this wave.

What You’ll Do

- Build end-to-end training pipelines: data → training → eval → inference
- Design new model architectures or adapt open-source frontier models
- Fine-tune models using state-of-the-art methods (LoRA/QLoRA, SFT, DPO, distillation)
- Architect scalable inference systems using vLLM / TensorRT-LLM / DeepSpeed
- Build data systems for high-quality synthetic and real-world training data
- Develop alignment, safety, and guardrail strategies
- Design evaluation frameworks covering performance, robustness, safety, and bias
- Own deployment: GPU optimization, latency reduction, scaling policies
- Shape early product direction, experiment with new use cases, and build AI-powered experiences from zero
- Explore frontier techniques: retrieval-augmented training, mixture-of-experts, distillation, multi-agent orchestration, multimodal models

What It’s Like to Work Here

- You take ownership: you solve problems end-to-end rather than waiting for perfect instructions
- You learn through action: prototype → test → iterate → ship
- You’re calm in ambiguity: zero-to-one building energizes you
- You bias toward speed with discipline: V1 now > perfect later
- You see failures and feedback as essential to growth
- You work with humility, curiosity, and a founder’s mindset
- You raise the bar for yourself and your teammates every day