AI Integrations Backend Engineer

Orby AI — wait, no em-dash — Orby AI, Pittsburgh, PA

About The Position

We're hiring an AI Integrations Backend Engineer to bridge our core numeric intelligence engine with the rapidly evolving agentic ecosystem. You'll design and maintain the interfaces that connect Wood Wide AI's embeddings and reasoning engine to LLM agents, orchestration frameworks, and Model Context Protocol (MCP)-based tools. This role blends backend engineering, AI systems integration, and developer experience to make our product plug-and-play for AI builders and enterprises alike. You'll work closely with the founding team to turn cutting-edge research into robust, production-grade microservices and SDKs that power agentic workflows.

Requirements

  • 1-3 years of backend engineering experience (Python, FastAPI, Flask, or similar).
  • Proven experience designing API-first microservices and integrating with AI or ML systems.
  • Experience designing and managing scalable data pipelines and storage solutions with tools like Postgres, Redis, Kafka, and Airbyte.
  • Experience with Docker, GitHub Actions, and cloud providers (e.g. GCP, AWS).
  • Strong fundamentals in REST/gRPC design, authentication, and CI/CD.
  • Familiarity with LLM tool-calling APIs, agentic orchestration frameworks, and MCP-based architectures.
  • Experience with LangGraph, PydanticAI, MCP servers, or equivalent orchestration stacks.
  • Knowledge of vector databases (FAISS, pgvector, Pinecone, Weaviate).
  • Experience using LLM APIs such as OpenAI's and Anthropic's.
  • Clear communication, strong documentation skills, and a collaborative mindset.

Nice To Haves

  • Solid understanding of async programming, streaming APIs, and structured data handling.
  • Background in numerical or tabular data systems, or working with embeddings and ML inference pipelines.

Responsibilities

  • Design and build core backend APIs using Python (FastAPI) to serve numeric intelligence functions like embedding generation, reasoning calls, and model inference endpoints.
  • Integrate our engine with LLM agents and tool-calling interfaces (OpenAI, Anthropic, Gemini) to enable structured reasoning over numeric data.
  • Develop microservices and a Model Context Protocol (MCP) server, exposing modular "tools" that agents can securely invoke to process tabular or time-series data.
  • Orchestrate agentic workflows using frameworks such as Vercel AI SDK, LangGraph, PydanticAI, or custom planners, and evaluate trade-offs in performance and observability.
  • Build and maintain a Python SDK with clean abstractions and developer-first ergonomics.
  • Develop data connectors for major environments such as Databricks, Snowflake, Postgres, and S3/GCS.
  • Implement auth, rate limiting, usage metering, and structured logging for reliable production operations.
  • Containerize and deploy microservices via Docker, GitHub Actions, and GCP/AWS, ensuring scalability and maintainability.
  • Collaborate cross-functionally with ML and DX teammates to ensure seamless data flow and user experience.

Benefits

  • Join a high-caliber founding team defining a new layer of the AI stack.
  • Work on the next frontier of GenAI at the intersection of symbolic reasoning, numeric ML, and agentic intelligence.
  • Ship real systems that power the next generation of AI agents.
  • Flexible, high-ownership work environment with deep technical impact and visibility.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Entry Level
  • Industry: Professional, Scientific, and Technical Services
  • Education Level: No Education Listed
  • Number of Employees: 11-50 employees
