Full Stack Data Engineer

Millennium
New York, NY
Onsite

About The Position

We are seeking a talented, motivated engineer with strong full-stack data engineering skills to join our innovative, dynamic team. This role focuses on building reliable, scalable data products and user experiences that power AI/ML modeling, agentic workflows, and reporting. You will work end-to-end, from data ingestion and transformation through to the UI, to deliver production-grade solutions in a collaborative, fast-paced environment.

Our Python-based data platform is undergoing a major evolution toward a modern, cloud-native ELT architecture. We are standardizing on Snowflake as our central data platform and dbt as our core transformation framework, implementing scalable, maintainable ELT practices that simplify ingestion, modeling, and deployment.

This role will be pivotal in designing and building robust data pipelines and semantic layers that directly power our AI and machine learning initiatives, delivering clean, reliable, and well-modeled data assets to our data science team for feature engineering, model training, and production inference. You will collaborate closely with data scientists and ML engineers to ensure our data ecosystem is optimized for experimentation speed, model performance, and seamless integration into downstream products and services.

Requirements

  • 5+ years of software engineering experience, with a full-stack background building complex, scalable data engineering pipelines using data warehouse technologies, SQL with dbt, Python, AWS with Terraform, and modern UI technologies.
  • Strong experience with medallion data architecture patterns using data warehouse technologies (e.g., Snowflake), data transformation tooling (e.g., dbt), BI tooling, and NoSQL data marts (e.g., Elasticsearch/OpenSearch).
  • Solid understanding of unit testing, CI/CD automation, and quality assurance processes, covering both data pipeline testing and operational data quality checks.
  • Working knowledge of Agile development practices and workflows.
  • Bachelor’s or Master’s degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Nice To Haves

  • Hands-on experience with large language models (LLMs) and agentic frameworks/workflows.
  • Familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for search and analytics solutions.
  • Experience with AWS cloud services; familiarity with SageMaker; and CI/CD tooling such as GitHub Actions or Jenkins.
  • Experience building user interfaces with Angular or a modern UI stack.
  • Broad understanding of equities, fixed income, derivatives, futures, FX, and other financial instruments.

Responsibilities

  • Collaborative development: partner with business stakeholders, data scientists, and engineering teammates to define and adopt modern data engineering practices.
  • Full-stack data engineering: build across the entire stack, including data ingestion/acquisition and transformation, APIs, front-end components, and automated test suites.
  • Specification and design: translate short- and long-term business requirements, architectural considerations, and competing timelines into clear, actionable specifications.
  • Code quality: write clean, maintainable, efficient code that adheres to evolving standards and quality processes, including unit tests and isolated integration tests in containerized environments.
  • Continuous improvement: contribute to agile practices and provide input on technical strategy, architectural decisions, and process improvements.

Benefits

  • Discretionary performance bonus
  • Comprehensive benefits package