Senior Data Engineer

Nelo · New York, NY
Onsite

About The Position

We are looking for a Senior Data Engineer to help design, build, and operate the core data platform that powers analytics, machine learning, and business decision-making at Nelo. This is a hands-on, high-impact role for an experienced engineer who enjoys working across the full data lifecycle, from ingestion and transformation to reliability, scalability, and ML enablement. You will partner closely with Analytics, Product, Engineering, Marketing, Risk, and Machine Learning teams to ensure our data infrastructure is robust, scalable, and easy to build on as Nelo continues to grow.

Requirements

  • At least 5 years of experience in data engineering, software engineering, or backend engineering roles with significant ownership of production data systems.
  • Strong proficiency in Python for building data pipelines and infrastructure.
  • Advanced SQL skills and deep experience with data modeling for analytics and ML use cases.
  • Hands-on experience building ETL/ELT pipelines using tools or frameworks such as Airflow, AWS Glue, dbt, or similar orchestration systems.
  • Experience working with cloud data warehouses and query engines such as Athena/Presto, Redshift, BigQuery, or Snowflake.
  • Familiarity with big data or distributed processing frameworks such as Spark (or equivalent).
  • Experience designing and maintaining CI/CD pipelines for data workflows.
  • Strong understanding of data reliability, observability, and best practices for production systems.
  • Ability to write clean, maintainable, and well-tested code.
  • Proven ability to work cross-functionally with Analytics, ML, and Product teams.
  • Strong communication skills to explain technical concepts to non-engineers and align on trade-offs.

Nice To Haves

  • Exposure to feature stores, ML data pipelines, or close collaboration with ML Engineering teams.
  • Experience with AWS (S3, IAM, Lambda, Glue, EMR, etc.) or similar cloud ecosystems.

Responsibilities

  • Own and evolve the data platform: Design, build, and maintain scalable, reliable data pipelines and datasets that power analytics, reporting, and machine learning use cases across the company.
  • Build and maintain ETL/ELT pipelines: Develop production-grade pipelines that ingest data from transactional systems, third-party providers, and event streams into our data warehouse and feature store.
  • Enable analytics and business teams: Partner with Data Analytics and stakeholders to ensure data is well-modeled, documented, and accessible for self-service analysis.
  • Support machine learning workflows: Build and maintain feature pipelines and feature stores that support model training, validation, and online/offline inference.
  • Ensure data quality and reliability: Implement data quality checks, monitoring, alerting, and SLAs to ensure trust in our data products.
  • Improve developer experience: Build tooling, abstractions, and CI/CD pipelines that make it easier and safer to develop, test, and deploy data pipelines.
  • Collaborate cross-functionally: Work closely with Software Engineers, ML Engineers, and Product Managers to align data models and pipelines with product and business needs.
  • Scale for growth: Continuously improve performance, cost efficiency, and scalability of our data infrastructure as data volume and use cases expand.

Benefits

  • Very competitive salary and equity
  • 100% medical, dental & vision insurance coverage for you
  • Unlimited PTO
  • 401(k) for US-based employees
  • Extended maternity and paternity leave
  • Relocation support


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 11-50 employees
