Staff Software Engineer — Search Platform, Ingestion & Indexing

Thomson Reuters | Frisco, TX
$118,400 - $219,800 | Hybrid

About The Position

Advanced Content Engineering (ACE) is seeking a Staff Software Engineer to serve as the technical anchor for the search platform’s ingestion and indexing systems. The platform processes millions of documents across TR’s legal, tax, and professional content corpora — parsing, chunking, enriching, embedding, and indexing them into a hybrid search engine that powers both human-facing search interfaces and autonomous AI agents. Getting this pipeline right, at scale, with zero-downtime operations and increasingly agentic retrieval patterns, is one of the platform’s most consequential engineering challenges.

This role owns the design, implementation, and operational health of the document ingestion pipeline and search index management systems — from the Kafka-based streaming infrastructure that moves documents through processing stages to the Vespa application architecture that stores and serves them. Staff Engineers on this team define, build, test, deploy, scale, and operate what they ship; full-stack ownership is not a principle we aspire to but the daily reality. AI-assisted development is the team norm, not the exception, and continuous delivery to production is the expectation. This is a role for someone who sets architectural boundaries, not just executes within them.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
  • 8+ years of software engineering experience, with demonstrated progression to staff-level or equivalent technical leadership — including ownership of a functional area and leadership of significant cross-functional projects
  • Deep expertise in distributed stream processing: designing, building, and operating high-throughput, fault-tolerant event-driven pipelines using Kafka or equivalent technologies at production scale
  • Production experience with Vespa, OpenSearch, or Elasticsearch — including schema design, ranking profile configuration, and end-to-end application lifecycle management
  • Mastery of Python with strategic awareness of language and framework selection; strong software engineering fundamentals including test strategy, performance architecture, and system design
  • Proficiency with AWS cloud services used in data pipeline and search infrastructure (MSK, ECS, Lambda, DynamoDB, Step Functions, CloudWatch), with infrastructure-as-code experience (Terraform or AWS CDK)
  • Demonstrated ability to take full operational responsibility end-to-end — defining SLOs, building observability, running on-call, and driving systematic improvements from incident retrospectives — with a track record of shipping to production frequently and removing delivery friction proactively
  • Comfort and fluency with AI-assisted development tools; you use them to move faster and produce higher-quality work, not as a novelty
  • Track record of establishing architectural principles, cross-system design patterns, and documentation standards that improve the broader team’s engineering quality

Nice To Haves

  • Experience operationalizing ML models in production: inference serving, model promotion pipelines, canary rollouts, and production observability for model quality signals
  • Familiarity with agentic retrieval patterns — multi-hop retrieval, latency budget management across retrieval hops, context window optimization, and stateful session design
  • Experience with online search analytics: instrumenting systems for query performance monitoring, A/B or interleaved ranking experiments, and query log analysis to surface relevance gaps
  • Experience with embedding pipelines, vector indexing, and hybrid (dense + sparse) retrieval architectures in a production context
  • Familiarity with Protobuf schema design and schema registry governance patterns (Confluent Schema Registry or equivalent)
  • Experience building self-service or multi-tenant platform infrastructure where reliability and correctness directly affect multiple downstream teams
  • Background in AI ethics frameworks and responsible deployment of machine learning components in production pipelines

Responsibilities

  • Plan, design, develop, and own the end-to-end document ingestion pipeline — a Kafka-based stream processing architecture that moves documents through parsing, chunking, enrichment (entity extraction, embedding generation, metadata enrichment), and indexing stages — including all fault tolerance, version ordering, and at-least-once delivery guarantees
  • Architect and implement pluggable, configurable pipeline components (parsers, chunkers, enrichers, indexers) that client teams can assemble into custom topologies via the platform’s self-service APIs, while maintaining reliable, observable, and performant execution
  • Own the platform’s Protobuf-based document schema and schema registry integration — establishing schema governance standards, enforcing backward-compatible evolution, and ensuring reliable serialization across all pipeline stages
  • Design and implement dual-flow ingestion: a high-throughput batch path for full reindexing and a low-latency incremental path for real-time document updates, with strong guarantees around document version ordering and idempotent processing
  • Lead the migration of ingestion infrastructure from OpenSearch to Vespa, including design of Vespa document processors, custom Kafka feeders, and application package architecture — resolving complex technical challenges that have little or no precedent within the team
  • Own the end-to-end lifecycle for custom models integrated into the ingestion pipeline — re-ranking models, embedding models, and enrichment components — including inference serving behind a stable API surface, latency SLO management, hardware and runtime configuration (batching, quantization), and scaling
  • Build and operate the model promotion pipeline: the CI/CD workflow that moves a model artifact from the fine-tuning team through staging to production, including versioning, canary rollouts, and rollback mechanisms — ensuring the platform team can operate model updates independently without depending on the research team for production changes
  • Define and maintain integration contracts between custom models and downstream pipeline components — governing input/output schemas, compatibility requirements, and the governance process for model updates that ensures search pipeline consumers are not broken by changes upstream
  • Instrument model serving for production observability: latency distributions, throughput, error rates, and quality signals such as re-ranking score distributions — enabling the team to detect regressions or model drift without requiring the fine-tuning team’s involvement
  • Own the search engine layer end-to-end: design and operate Vespa (and OpenSearch during transition) index configurations, ranking profiles, schema definitions, and application package lifecycle management — applying architectural principles that scale to the platform’s long-term content and tenancy goals
  • Build and operate zero-downtime index management: shadow indexing, blue/green index promotion, and rolling reindex workflows that keep the platform available during major infrastructure changes
  • Implement and maintain the Component Registry and Index Registry — the platform’s catalog of reusable processing components and active index configurations — with a focus on correctness, observability, and safe concurrent modification
  • Develop the full-reindex and incremental-update orchestration logic, including change detection, document tracking, Kafka topic management, and DynamoDB-backed state management
  • Design ingestion and indexing infrastructure with agentic retrieval patterns as a first-class concern — including explicit latency budgets per retrieval hop, chunking and result compression strategies optimized for token economy in context windows, and index boundary definitions that give agents clean, predictable tool contracts
  • Build trace-level observability into the retrieval stack that captures which tools were called, in what order, and with what inputs — enabling reliable diagnosis and reproduction of failures in non-deterministic agentic retrieval paths
  • Design session state and cache invalidation patterns for multi-turn agentic search: reasoning carefully about cache validity windows, session state scope (per-user, per-session, per-query), and mechanisms to prevent stale context from corrupting downstream agent responses
  • Build and own the integration between the ingestion pipeline and the platform’s offline evaluation framework — ensuring that experiment runs produce query/result outputs that feed seamlessly into the search grading tool, supporting gold test set maintenance, LLM-as-judge evaluation, and side-by-side ranking comparison across pipeline versions
  • Instrument the query and retrieval stack for online analytics: real-time query latency and throughput monitoring, query log collection for session analysis, and the infrastructure to support A/B and interleaved ranking experiments in production — generating the signals that connect low-level search metrics to downstream product KPIs
  • Partner with TR Labs and research scientists to ensure that new search components can be evaluated in isolation — with automated offline evaluation on every build and a clear path from evaluation results to production promotion decisions
  • Take full operational responsibility for ingestion and indexing infrastructure: define SLOs, set measurable goals and meet them, build and maintain CloudWatch dashboards and alarms, and participate in on-call rotations — you built it, you own it, you run it
  • Treat delivery friction as the enemy: identify and remove obstacles that slow the team’s ability to ship ingestion and indexing changes to production safely and frequently — improving CI/CD pipelines, deployment automation, and local development workflows as a standing priority
  • Instrument pipeline components with distributed tracing, structured logging, and rich metrics — establishing documentation standards and knowledge management practices so that the team and platform consumers can understand system behavior at all times
  • Design and implement resilient fault tolerance mechanisms — dead-letter queues, retry strategies with exponential backoff, circuit breakers, consumer lag monitoring — that make the pipeline robust to downstream failures and transient errors
  • Drive system-level performance architecture: profiling ingestion throughput and indexing latency, identifying bottlenecks, and implementing optimizations that meet platform SLOs under peak load
  • Serve as the team’s deepest technical authority on document processing pipelines and search engine internals — guiding architectural decisions, resolving technical ambiguity, and establishing cross-system design patterns that raise the quality bar across the team
  • Lead significant projects and initiatives that span multiple engineers and interact with other teams; determine work priorities based on strategic direction; recommend modifications to team operations and make needed adjustments to short-term priorities while maintaining strategic focus
  • Mentor and develop Senior and mid-level engineers — providing coaching, technical direction, and educational opportunities in modern distributed systems, stream processing, search infrastructure, and AI-assisted development practices
  • Collaborate closely with TR Labs and research scientists to integrate new chunking strategies, embedding models, and enrichment techniques into the pipeline in a safe, well-instrumented, and ethically responsible way
  • Deliver effective presentations on complex technical concepts to both technical and non-technical stakeholders; develop strategic plans for technology implementation that align with business objectives

Benefits

  • Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
  • Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
  • Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
  • Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
  • Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
  • Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
  • Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.
  • Comprehensive Benefits: We offer a comprehensive benefits package to our employees. Our benefit package includes market-competitive health, dental, vision, disability, and life insurance programs, as well as a competitive 401k plan with company match. In addition, Thomson Reuters offers market-leading work-life benefits with competitive vacation, sick and safe paid time off, paid holidays (including two company mental health days off), parental leave, and sabbatical leave.
  • Voluntary Benefits: Optional hospital, accident, and sickness insurance paid 100% by the employee; optional life and AD&D insurance paid 100% by the employee; Flexible Spending and Health Savings Accounts; fitness reimbursement; access to an Employee Assistance Program; a Group Legal and Identity Theft Protection benefit paid 100% by the employee; access to a 529 Plan; commuter benefits; Adoption & Surrogacy Assistance; Tuition Reimbursement; and access to an Employee Stock Purchase Plan.