Senior Data Engineer

Metropolis | New York, NY
$160,000 - $220,000 | Onsite

About The Position

The real world is the next frontier, and at Metropolis, we are creating the artificial intelligence to make it responsive. We are pioneering the Recognition Economy — a future where mundane repetition disappears and being known unlocks access, comfort, and belonging everywhere you go. From transforming parking into a seamless drive-in, drive-out experience for millions of Members to expanding our intelligence layer across retail and hospitality, we are building a world that feels instinctive and magical. The future isn't coming; it's here, and we need builders, innovators, and problem solvers to help us create it.

As a Senior Data Engineer at Metropolis, you will play a key role in shaping data products that align with our mission. Your technical expertise and analytical acumen will contribute to designing and building extensive data sets that impact thousands of internal users and millions of Members. Join a world-class data engineering team dedicated to advancing your skills and career in data engineering and beyond.

Requirements

  • Bachelor's degree in Computer Science, Computer Engineering, or a relevant technical field
  • 5+ years of experience in data engineering, database engineering, business intelligence, data warehousing, and ETL tools, working with large data sets in the cloud
  • 5+ years of experience with Python, and experience building scalable Big Data solutions and ETL ecosystems
  • Proficiency in SQL, ETL/ELT, and data modeling, with extensive experience in Snowflake and dbt
  • Hands-on experience with RDBMS such as MySQL and Postgres; MS SQL Server experience is a plus
  • Familiarity with orchestration tools such as Airflow and Automic, along with a working knowledge of CI/CD pipelines
  • Ability to deliver high standards of code quality, system reliability, and performance
  • Deep understanding of the modern IaC ecosystem to drive automated infrastructure deployments
  • Experience with cloud computing services, preferably AWS, including hands-on experience with services like Glue, Airflow (MWAA), DMS, EKS, and SNS

Nice To Haves

  • Experience with Spark and PySpark
  • Master's degree in Computer Science, Computer Engineering, or a relevant technical field

Responsibilities

  • Collaborate with cross-functional teams to develop end-to-end data pipelines and foundational data sets
  • Design and own data architecture for large-scale projects while managing operational trade-offs
  • Build and optimize sophisticated data pipelines, models, and visualizations across multiple domains
  • Define and manage Service Level Agreements (SLAs) for all owned data sets
  • Implement data security models, ensure privacy compliance, and evolve data governance processes
  • Solve complex integration challenges using optimal ETL/ELT patterns for structured and unstructured data
  • Maintain production processes by optimizing complex code using advanced algorithmic concepts
  • Streamline data artifact development by optimizing pipelines, dashboards, and frameworks
  • Mentor team members through actionable feedback to foster collective skill growth
  • Maintain flexible availability, including occasional evening hours

Benefits

  • Healthcare benefits
  • 401(k) plan
  • Short-term and long-term disability coverage
  • Basic life insurance
  • Lucrative stock option plan
  • Bonus plans
© 2024 Teal Labs, Inc