Chief Technical Officer

Warp Speed Holdings LLC, Houston, TX

About The Position

Warp Speed Holdings is a growing holding company that invests in and supports a diverse portfolio of start-ups, early-stage ventures, and established businesses. In addition to providing capital, the company delivers hands-on operational support across key areas such as finance, accounting, human resources, technology, compliance, and other essential back-office functions. By combining investment resources with experienced operational leadership, Warp Speed Holdings helps its portfolio companies build strong foundations and scale efficiently. True to its name, the organization is known for moving quickly, making strategic decisions, and driving sustainable growth across the companies it supports.

This position supports AI Senior Manager, a workforce intelligence consulting firm that helps mid-market companies build the operational data foundation they need - structured workforce tracking, automated reporting, and AI-generated management insights - so leadership can make faster, smarter decisions backed by real data.

This role designs the architecture and builds every system from scratch. The lead engineer builds and owns the full pipeline: data extraction from client platforms via API, metric computation and analysis, AI-generated narrative insights via Anthropic Claude, automated report assembly, and scheduled delivery. This position plays a key role in turning client workforce data into the executive-grade intelligence reports that are the core product of the business. The right candidate operates comfortably with ambiguity, makes technical decisions independently, and ships working systems without waiting for detailed specifications. This role includes potential equity participation in a growing start-up with over a million dollars in annual revenue.

Requirements

  • 3+ years of hands-on experience building data pipelines and working with REST APIs in Python
  • Strong experience with API integration patterns including OAuth authentication, pagination, rate limiting, error handling, and retry logic across multiple third-party platforms
  • Experience integrating with LLM APIs (Anthropic Claude or OpenAI) - not just calling them, but building production systems around them with structured prompts, output validation, and quality guardrails
  • Solid understanding of data pipeline design: extraction, transformation, loading, scheduling, monitoring, and handling the operational reality that production data is never as clean as test data
  • Experience with cloud infrastructure, particularly AWS services (Lambda, RDS, S3, EventBridge) or equivalent
  • Strong proficiency in Python including pandas, numpy, requests, and building scheduled automation scripts
  • Demonstrated ability to work independently with minimal direction - taking ambiguous objectives, making architectural decisions, and shipping working solutions without detailed specifications
  • Strong communication skills and ability to explain technical decisions and tradeoffs to non-technical team members
  • Experience with version control (Git/GitHub), CI/CD pipelines, and production deployment practices
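The pagination, retry, and backoff patterns called out above can be sketched in a few lines. This is a minimal illustration, not code from the role: `fetch_page` is a hypothetical callable standing in for a real authenticated `requests` call, and the `{"items": ..., "has_more": ...}` payload shape is an assumption.

```python
import time
from typing import Callable, Iterator

def fetch_all_pages(
    fetch_page: Callable[[int], dict],
    max_retries: int = 3,
    backoff_base: float = 1.0,
) -> Iterator[dict]:
    """Yield records from a paginated API, retrying transient failures.

    fetch_page(page) is assumed to return {"items": [...], "has_more": bool};
    real client code would wrap requests.get(...) with auth headers and
    honor rate-limit headers before retrying.
    """
    page = 1
    while True:
        for attempt in range(max_retries):
            try:
                payload = fetch_page(page)
                break  # success: stop retrying this page
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the failure
                time.sleep(backoff_base * 2 ** attempt)  # exponential backoff
        yield from payload["items"]
        if not payload.get("has_more"):
            return
        page += 1
```

In production the same skeleton would also branch on HTTP 429 responses and respect `Retry-After`, which is why retry logic and rate limiting are listed together in the requirements.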

Nice To Haves

  • Experience with workforce analytics, HR tech, or operational data platforms
  • Statistical analysis background including trend detection, anomaly detection, and time series analysis
  • Web development experience (Next.js, React) for building client-facing dashboards
  • Experience with Google Sheets API (gspread) and PostgreSQL database design and administration
  • Experience with PDF generation pipelines (weasyprint, pdfkit, or similar)
  • Experience with transactional email delivery systems (Resend, SendGrid, or similar)
  • Previous startup or early-stage company experience where you were the sole or primary engineer
  • Familiarity with the ActivTrak API or similar workforce analytics platforms

Responsibilities

  • Design, build, and maintain the complete data extraction pipeline, connecting to client ActivTrak instances and project management tools (Asana, Monday.com, ClickUp) via REST APIs to pull workforce analytics data on a scheduled cadence
  • Build and maintain the analysis and computation layer that transforms raw extracted data into workforce metrics including attendance rates, productivity scores, focus time, task completion rates, workload distribution, trend comparisons, and anomaly detection
  • Integrate with the Anthropic Claude API to generate AI-powered narrative insights for client reports, including prompt engineering, structured output parsing, and programmatic validation that ensures the AI never fabricates statistics
  • Build the report generation and delivery system - Markdown templates populated with computed data and AI narratives, converted to branded PDFs, and delivered via email on each client's contracted schedule
  • Build and maintain monitoring and alerting for the full pipeline - sync failures, data completeness drops, AI generation errors, and missed delivery windows - so problems are caught before they reach clients
  • Design and manage the data storage layer, migrating to PostgreSQL as client volume grows
  • Configure and manage scheduled pipeline execution, initially via GitHub Actions and migrating to AWS Lambda and EventBridge as infrastructure matures
  • Onboard new clients into the pipeline - configure API connections, build extraction scripts, verify data quality, set up reporting templates, and confirm automated delivery within 3 business days of receiving credentials
  • Ensure strict client data isolation across all storage and pipeline systems so that one client's data can never appear in another client's reports or analysis
  • Maintain a prompt library and per-client context documents for the AI narrative engine, iterating on prompt quality based on client feedback to ensure insights are specific, accurate, and actionable rather than generic
  • Support infrastructure scaling decisions - database migration timing, compute architecture, caching strategies, and capacity planning based on client growth trajectory
  • Build Phase 2 features as the pipeline stabilizes, including automated KPI tracking for senior manager performance, predictive workforce modeling, anomaly detection, and eventually a client-facing analytics dashboard
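The "never fabricates statistics" guardrail described in the Claude integration bullet can be sketched as a post-generation check that compares every figure the model cites against the metrics actually computed. This is an illustrative sketch only; the metric names and one-decimal rounding tolerance are assumptions, not the firm's real schema.

```python
import re

def validate_narrative(narrative: str, metrics: dict[str, float]) -> list[str]:
    """Return the numbers in an AI-generated narrative that do not match
    any computed metric (within rounding), so a fabricated statistic
    blocks the report instead of reaching a client."""
    allowed = {round(v, 1) for v in metrics.values()}
    problems = []
    for token in re.findall(r"\d+(?:\.\d+)?", narrative):
        if round(float(token), 1) not in allowed:
            problems.append(token)  # figure not backed by computed data
    return problems
```

A pipeline using this pattern would reject or regenerate any narrative for which the returned list is non-empty, pairing the check with structured prompts that instruct the model to cite only the metrics it is given.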

Benefits

  • Potential equity participation