AI Platform Developer

Harley Ellis Devereaux
Boston, MA · Hybrid

About The Position

HED is hiring an AI Platform Developer to build durable, production-grade AI agent systems that integrate with governed data.

About HED

We are a team full of ideas, experience, creativity, passionate opinions, insatiable curiosity, uncompromising integrity, commitment, and skill. Our culture is about aspiration, embracing change and challenges, listening to (and learning from) each other, encouraging continual learning, and inspiring collective growth. As an inclusive, integrated architecture and engineering practice, we value the diversity of perspectives, experiences, abilities, and expertise that advances both the work we do and the world we share.

Position Summary

You own the operational foundations that make AI safe and maintainable: connectors into the Bronze layer, versioned interfaces, logging and auditability, evaluation, cost controls, and guardrails. This is an engineering role focused on reliability and lifecycle thinking, not a "light automation" position. You collaborate directly with internal stakeholders to translate needs into systems that hold up under real usage and evolve with the business.

Requirements

  • Bachelor’s degree in computer science, data engineering, or a related field (or equivalent experience).
  • 5+ years of software engineering and/or data engineering experience, including building and operating production services.
  • Demonstrated experience deploying and supporting AI/LLM systems in production (monitoring, incidents, iteration, and measured improvement).
  • Hands-on multi-agent orchestration experience (e.g., LangChain, AutoGen, CrewAI, or similar), including workflow design and failure handling.
  • Experience owning connectors/ingestion pipelines (reliability patterns such as retries, idempotency, schema/version management, and alerting).
  • Strong Python engineering skills; comfort working with APIs, data stores, and workflow/orchestration tooling.
  • Operational discipline: logging, audit trails, debugging methodology, cost/token controls, and rollback mindset.
  • Documentation-first habits (design notes, runbooks, interface contracts) and the ability to communicate tradeoffs to non-technical stakeholders.
  • Comfortable using AI-enabled productivity tools for meetings and knowledge capture (e.g., Fireflies AI Note Taker) while maintaining privacy and compliance boundaries.

Nice To Haves

  • Databricks/lakehouse + medallion familiarity; experience implementing governance/audit requirements; AEC or project-based domain exposure.

Responsibilities

  • Design, build, and orchestrate multi-agent workflows (handoffs, coordination, retries/fallbacks, and failure handling) for business-critical use cases.
  • Develop agents with role-appropriate personas, boundaries, and context so outputs are consistent, trustworthy, and aligned to business intent.
  • Own Bronze-layer ingestion: build and maintain connectors/interfaces; manage schema drift, reliability, change handling, monitoring, and alerting.
  • Treat data inputs/outputs as contracts—versioned, traceable, testable—and implement validation at data boundaries.
  • Implement observability across the AI lifecycle (structured logs, traces, evaluation artifacts, and audit trails) so systems are debuggable and reviewable.
  • Implement guardrails and controls: budgets, rate limits, model selection strategy, safe defaults, and kill-switches to prevent runaway behavior.
  • Apply governance and access boundaries early (permissions, sensitive data handling, traceability, compliance posture) rather than bolting it on later.
  • Produce durable documentation (architecture notes, runbooks, interface contracts) and enable others to operate and extend the platform.
  • Provide evidence-based buy vs. build recommendations, and advocate for responsible sunsetting when systems reach end-of-life.