Senior Data Engineer

Above Lending
Chicago, IL
$110,000 - $150,000
Hybrid

About The Position

Above Lending is a next-generation financial services company. We provide simple, transparent products aimed at helping our clients achieve their personal finance goals. With competitive rates and personalized support, our mission is to simplify the lending process and help borrowers attain financial well-being. We are committed to making credit more affordable and accessible.

We're looking for a Senior Data Engineer who operates like an owner and thinks like a systems engineer. This is a deeply hands-on role: you'll spend the majority of your time writing SQL, building pipelines, and working directly with raw data. Beyond pure engineering, we expect some experience in database upkeep and maintenance, as well as the ability to analyze data: spotting patterns, questioning results, and understanding what the numbers actually mean. You'll take ambiguous problems, break them down, and deliver reliable, scalable solutions without step-by-step guidance. Strong candidates are comfortable with messy source data, incomplete documentation, and figuring things out. If you need heavy direction or clean inputs to do your best work, this role likely isn't the right fit.

This is a hybrid role based in Chicago, IL.

Requirements

  • 5+ years of hands-on data engineering experience - including administration or support.
  • Fintech or similarly data-intensive environments preferred.
  • Expert-level SQL: joins, window functions, performance trade-offs.
  • Proven experience building data models and marts from raw, imperfect source data.
  • Solid Python skills applied to practical data engineering problems.
  • Hands-on experience with Snowflake, Airflow, Fivetran, and dbt.
  • A high bar for data quality - you don't trust a number until you've validated it.
  • The ability to operate independently, make decisions under ambiguity, and follow problems through to resolution.
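To make "expert-level SQL" concrete, here is a minimal sketch of the kind of window-function query the role calls for, run against an in-memory SQLite database (requires SQLite 3.25+ for window functions; the `payments` table and its columns are illustrative assumptions, not Above Lending's actual schema):

```python
import sqlite3

# Toy schema standing in for raw source data; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE payments (client_id INTEGER, paid_on TEXT, amount REAL);
    INSERT INTO payments VALUES
        (1, '2024-01-01', 100.0),
        (1, '2024-02-01',  50.0),
        (2, '2024-01-15', 200.0);
""")

# Window function: per-client running total of payments, ordered by date.
rows = conn.execute("""
    SELECT client_id,
           paid_on,
           SUM(amount) OVER (
               PARTITION BY client_id
               ORDER BY paid_on
           ) AS running_total
    FROM payments
    ORDER BY client_id, paid_on
""").fetchall()
# → [(1, '2024-01-01', 100.0), (1, '2024-02-01', 150.0), (2, '2024-01-15', 200.0)]
```

The `PARTITION BY` / `ORDER BY` clauses are the core of the pattern: the aggregate restarts per client and accumulates in date order, without collapsing rows the way a `GROUP BY` would.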

Nice To Haves

  • You've debugged data issues others couldn't explain.
  • You've worked with poorly documented or unreliable source systems and made them usable.
  • You've said "this number is wrong" - and proved it.
  • You care about correctness as much as performance.
  • You default to solving problems, not escalating them.

Responsibilities

  • Build and own end-to-end data pipelines (ingestion → staging → marts) using Snowflake, Airflow, Fivetran, and dbt.
  • Work directly with raw data. Identify issues and implement fixes in base and staging layers before they propagate downstream.
  • Design data models that are simple, scalable, and trusted by the business.
  • Write advanced, high-performance SQL.
  • Understand the underlying database technology: execution plans, indexing, clustering, and storage behavior.
  • Debug data issues deeply, tracing problems across systems.
  • Use Python for ETL automation and scripting.
  • Build and enforce data quality checks that prevent bad data from reaching consumers.
  • Monitor pipelines proactively and resolve issues.
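The "data quality checks that prevent bad data from reaching consumers" bullet can be sketched as a simple gate in the spirit of dbt tests, shown here against an in-memory SQLite table (table, column, and check names are illustrative assumptions, not Above Lending's actual setup):

```python
import sqlite3

# Hypothetical staging table; in practice this would live in Snowflake.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_loans (loan_id INTEGER, principal REAL, client_id INTEGER);
    INSERT INTO stg_loans VALUES (10, 5000.0, 1), (11, 7500.0, 2);
""")

def run_checks(conn):
    """Run each check query; return (check_name, failing_row_count) for failures."""
    checks = {
        # Not-null check: every loan must have an id.
        "loan_id_not_null": "SELECT COUNT(*) FROM stg_loans WHERE loan_id IS NULL",
        # Range check: principal must be positive.
        "principal_positive": "SELECT COUNT(*) FROM stg_loans WHERE principal <= 0",
        # Uniqueness check: no duplicate loan ids.
        "loan_id_unique": """
            SELECT COUNT(*) FROM (
                SELECT loan_id FROM stg_loans GROUP BY loan_id HAVING COUNT(*) > 1
            )
        """,
    }
    failures = []
    for name, sql in checks.items():
        (bad,) = conn.execute(sql).fetchone()
        if bad:
            failures.append((name, bad))
    return failures

failures = run_checks(conn)
# A pipeline step would raise here, stopping bad rows before the marts layer.
assert not failures, f"data quality gate failed: {failures}"
```

Each check is a query that counts violating rows; a nonzero count fails the gate, so problems surface in staging rather than propagating downstream.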


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

No Education Listed

Number of Employees

11-50 employees
