Data Engineer

Podimetrics
Remote

About The Position

Founded in 2011, Podimetrics set out on a mission to improve patient lives through early detection and prevention of diabetic foot ulcers, the leading cause of lower limb amputations. Podimetrics has since evolved into a rapidly growing virtual care management company with advanced technology and patient-centered services. Today, Podimetrics partners with and provides solutions to patients, payers, and providers that alleviate the health and financial implications of diabetic foot complications in high-risk populations. We are a mission-driven, financially responsible enterprise that enables patients to stand on their own feet and live more independent and fulfilling lives.

We are looking for an experienced Data Engineer with a strong bias to action, natural curiosity, and the ability to work independently while partnering effectively across teams. As part of the Data organization, you will play a central role in building and maintaining the systems, pipelines, and models that power analytics, machine learning, operational workflows, and strategic decision-making at Podimetrics. Your work directly supports our mission: improving patient lives through early detection and prevention of diabetic foot complications. You will own critical components of our modern data stack - Google BigQuery, cloud infrastructure, dbt, and Python-based pipelines - ensuring our data is accurate, timely, reliable, and accessible, and you will play an active role as we evolve toward an event-driven, pub/sub-based platform architecture. This is not a specialist role: the work spans data engineering, software development, and data science, and you are expected to move fluidly across each domain. You will leverage AI tools actively as a force multiplier, write production-quality code, and reason about system architecture, not just pipelines.

Requirements

  • 8+ years of experience in data engineering or a related technical role.
  • Strong SQL skills and experience working with cloud-based warehouses (BigQuery strongly preferred).
  • Strong software engineering fundamentals in Python - you write production-quality, maintainable code, not just scripts.
  • Hands-on experience with dbt - building, testing, deploying, and documenting models.
  • Experience with Git/GitHub and modern CI/CD practices for data (GitHub Actions, Cloud Build, Cloud Run, etc.).
  • Experience with structured logging, monitoring, and observability tools.
  • Experience working with or adjacent to ML/data science workflows - feature stores, training pipelines, or analytical modeling.
  • Demonstrated use of AI/LLM tools in an engineering workflow (Claude Code, Cursor, Copilot, etc.).
  • Experience with event-driven architectures and message bus systems (GCP Pub/Sub, Kafka, or equivalent), plus a clear understanding of stream processing, event schemas, and consumer patterns.

Responsibilities

  • Design, build, and maintain critical infrastructure, including but not limited to enterprise data warehouse structures and pipelines in BigQuery.
  • Develop and maintain dbt models, tests, and documentation to standardize and transform data in line with our data strategy.
  • Ensure consistent, reliable data structures by owning data modeling standards, documentation, and best practices.
  • Monitor, troubleshoot, and improve existing pipelines with an emphasis on automation, maintainability, and data quality.
  • Partner with analysts, data scientists, software engineers, and business stakeholders to design datasets that are high quality, well-modeled, and optimized.
  • Design and implement data ingestion patterns from event streams, translating operational events into analytical structures in BigQuery.
  • Implement best practices for data testing, data lineage, and data governance within dbt and downstream tools.
  • Investigate and resolve data quality issues by identifying root causes and delivering sustainable, technically sound solutions.
  • Proactively identify opportunities to improve data reliability, scalability, and usability across the organization.
  • Use AI coding tools (LLMs, code generation, etc.) to accelerate development and set the standard for AI-native engineering practices on the team.

Benefits

  • Base Pay: $130-$150k base salary commensurate with experience
  • Annual Bonus Opportunity
  • Equity Options
  • Flexible Paid Time Off
  • Paid Sick Leave
  • Paid Parental Leave
  • Competitive Medical, Dental, and Vision plans – Podimetrics covers 80% of premiums.
  • Health Savings Account with employer contribution
  • Employee Assistance Program - Free, confidential advice for team members who need help with stress, anxiety, financial planning, and legal issues.
  • 401k
  • Life Insurance - Podimetrics pays 100% of the cost of Basic Life & Personal Accident
  • Disability insurance – Podimetrics pays 100% of the cost of Short-Term and Long-Term Disability Insurance
  • Additional life insurance, critical illness, and accident coverage are available


What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

No Education Listed

Number of Employees

11-50 employees
