Software Engineer - AI Infrastructure

Assembled | New York City, NY
Posted 75 days ago | Hybrid

About The Position

Assembled builds the infrastructure that underpins exceptional customer support, empowering companies like CashApp, Etsy, and Robinhood to deliver faster, better service at scale. With solutions for workforce management, BPO collaboration, and AI-powered issue resolution, Assembled simplifies the complexities of modern support operations by uniting in-house, outsourced, and AI-powered agents in a single operating system. Backed by $70M in funding from NEA, Emergence Capital, and Stripe, and driven by a team of experts passionate about problem-solving, we're at the forefront of support operations technology.

We're looking for a software engineer to join our Infrastructure team, building and operating the core systems that power Assist, our rapidly growing AI agent platform for customer support. Assist automates support workflows across email, chat, and voice, and has grown from $0 to $1M in ARR in just 3 months. As adoption accelerates, we're investing deeply in scaling its infrastructure to meet increasing demand and security expectations from enterprise customers.

As part of the AI Infrastructure team, you'll be responsible for the systems that enable Assist to be fast, reliable, and secure. You'll work on foundational platform components that power real-time LLM usage at scale, while also exploring how AI can be leveraged internally to make our engineering team more productive. This team is highly cross-functional, working closely with the AI, security, and product engineering teams. This is a high-ownership role for someone who's excited by 0-to-1 building and shaping the infrastructure backbone of our AI products.

Requirements

  • 6+ years of engineering experience, with past ownership of high-scale, production-critical infrastructure.
  • Experience with distributed systems and container orchestration (especially Kubernetes).
  • Experience with AI/ML platforms or excitement to build foundational infrastructure for LLM-based applications.
  • Ability to thrive in fast-paced environments with shifting requirements and ambiguous problem spaces.
  • Motivated by impact, energized by deep technical challenges, and eager to work cross-functionally across security, AI, and product.
  • Strong familiarity with one or more parts of our tech stack: AWS, Kubernetes + Karpenter, OpenAI, Anthropic, vector databases, Postgres + PgBouncer, Snowflake, Redis, Go, Python, Datadog, Mezmo, CloudWatch, Buildkite, CircleCI.

Responsibilities

  • Manage and scale the infrastructure that serves LLM-powered agents across chat, email, and voice.
  • Select inference strategies, integrate with model providers (e.g. OpenAI, Anthropic), and dynamically route traffic for performance and cost efficiency.
  • Own highly available, fast-access storage and indexing layers optimized for real-time AI interactions.
  • Build systems for network-level intrusion detection (IDS/IPS), audit logging, and LLM usage policy enforcement.
  • Operate systems that surface key metrics: token usage, latency, cost per response, and quality signals.
  • Explore and evangelize the use of AI to accelerate internal engineering workflows.

Benefits

  • Generous medical, dental, and vision benefits.
  • Paid company holidays, sick time, and unlimited time off.
  • Monthly credits to spend on professional development, general wellness, Assembled customers, and commuting.
  • Paid parental leave.
  • Hybrid work model with catered lunches every day (M-F), snacks, and beverages in our SF & NY offices.
  • 401(k) plan enrollment.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Publishing Industries
  • Number of Employees: 101-250 employees
