Senior Software Engineer, Data Platform

Juniper Square
$165,000 – $200,000 · Remote

About The Position

We are building a next-generation intelligent data platform for private markets – a greenfield initiative that will reshape how financial data is ingested, normalized, validated, enriched, and distributed across a complex ecosystem. This is a foundational role on a small, high-caliber seed team working at the intersection of modern data engineering and applied AI.

As a Senior Software Engineer on the Data Platform team, you will own the end-to-end delivery of core pipeline components: schema mapping, data normalization, validation, enrichment, and distribution to downstream systems. You will write production code every day, work closely with staff engineers to make meaningful architectural contributions, and help build the technical standards and practices that the broader team will grow into.

This role is for someone who executes with high craft and speed, is fluent in agentic development as a first-class part of their workflow, and wants the challenge and ownership that comes with building something genuinely new.

Requirements

  • 4–7 years of software engineering experience, with a track record of shipping production systems end-to-end
  • Strong full-stack engineering fundamentals – backend services, data pipelines, and API design; we will ask you to walk through systems you personally built
  • Hands-on experience with data pipeline or data warehouse engineering: ETL/ELT patterns, schema design, normalization, and data distribution
  • Production experience building with LLMs – prompt design, model integration, and output validation in real systems
  • Fluency with AI-assisted and agentic development workflows; you use these tools daily and evaluate their output critically
  • Experience with AWS data infrastructure; Redshift experience a plus
  • Strong written communication – able to document technical decisions clearly for engineering and product audiences

Nice To Haves

  • Experience with RAG pipelines, vector stores (e.g. OpenSearch), or document extraction systems
  • Background in financial services data – familiarity with fund administration, investment data schemas, or institutional reporting workflows is a meaningful differentiator
  • Experience building data products for external customers, not just internal tooling
  • Familiarity with evaluation frameworks for AI outputs: deterministic checks, cross-model comparison, or human-in-the-loop review patterns

Responsibilities

  • Build and ship production-quality implementations of the data normalization, schema mapping, validation, enrichment, and distribution pipeline for a net-new intelligent data warehouse
  • Write clean, well-tested, performant code across the full stack – backend services, data pipeline logic, and API integrations
  • Take end-to-end ownership of features from design through deployment, with accountability for correctness and reliability in production
  • Work closely with staff engineers to shape the architecture of a modern, AI-native data warehouse serving institutional financial clients
  • Bring thoughtful input on schema design, normalization approaches, and API patterns – and execute those decisions with precision
  • Identify and raise technical risks early; propose and implement solutions rather than waiting to be directed
  • Use agentic coding tools and LLM-assisted development as your primary workflow – this is how the entire team operates
  • Critically evaluate AI-generated code for correctness, edge cases, and regressions – shipping quality output regardless of how it was produced
  • Contribute to the team’s evolving practices around AI-accelerated development and testing
  • Build and maintain data validation checks, monitoring, and observability tooling that keeps the pipeline trustworthy at scale
  • Participate in on-call and production support, diagnosing and resolving data quality issues quickly and thoroughly
  • Write and maintain clear technical documentation for the systems you build
  • Partner effectively with staff and senior engineers, your engineering manager, and product management to translate requirements into well-scoped, executable work
  • Participate in design and code reviews, offering and receiving feedback constructively
  • Develop domain intuition around private markets data – fund administration, investment data schemas, institutional reporting – to make better technical decisions

Benefits

  • Health, dental, and vision care for you and your family
  • Life insurance
  • Mental wellness coverage
  • Fertility and growing family support
  • Flex Time Off in addition to company-paid holidays
  • Paid family leave, medical leave, and bereavement leave policies
  • Retirement savings plans
  • Allowance to customize your work and technology setup at home
  • Annual professional development stipend