Curology · Posted 2 days ago
$170,000 - $190,000/Yr
Full-time • Mid Level
Remote
251-500 employees

Curology’s mission is to make effective, personalized skincare accessible. We were founded by dermatologists who believe everyone should have access to skincare products that actually work. Today, our licensed dermatology providers have helped millions of patients across all 50 states make that mission a reality. We combine expert medical care with personalized prescription formulas and dermatologist-developed skincare essentials to deliver science-backed solutions that meet people where they are. Join us in our mission to transform skin health and enhance lives, one patient at a time.

Mission of the Role: The Senior Data Engineer owns and evolves the data systems that power analytics, experimentation, and operational decision-making across the business. Reporting to the Director, Data & Analytics, the Senior Data Engineer will build reliable, scalable, cloud-native data infrastructure, partner closely with engineering and business teams, and help establish best practices that enable high-quality, privacy-safe data at scale. This role is ideal for an experienced, hands-on engineer who thrives in production environments and enjoys turning complex requirements into durable data solutions.

What you'll do:

  • Own the end-to-end design, build, and operation of core data infrastructure that delivers trusted, timely data for analytics, experimentation, and decision-making.
  • Within the first six months, lead the rebuild and stabilization of core data pipelines, establishing a reliable, well-documented foundation that enables accurate, scalable reporting and supports future analytics and experimentation needs.
  • Build and operate data pipelines using our modern data engineering stack, including Hevo, Fivetran, dbt, Snowflake, Airflow, AWS (S3, Data Lake, Glue), Paradime, Monte Carlo, Hex, and AI-enabled tools such as ChatGPT, Claude, and SageMaker.
  • Act as a senior technical contributor on the data engineering team, establishing best practices for data modeling, testing, observability, and production readiness.
  • Partner cross-functionally with Engineering, Product, Marketing, and Operations to translate business needs into durable, automated data solutions.
  • Improve developer and analyst productivity by reducing friction, standardizing tooling, and investing in self-service data capabilities.
  • Drive continuous improvement of metrics, measurement, and experimentation systems to support insight generation and rapid iteration.
  • Ensure all data systems are designed and operated with privacy, security, and regulatory compliance as foundational requirements.
  • Support high-priority business initiatives by delivering accurate data quickly while maintaining platform stability and long-term scalability.
What we're looking for:

  • 5–8 years of professional experience building and operating production data systems, with strong hands-on expertise in Python.
  • Ability to write clean, idiomatic, and maintainable Python, including well-structured, reusable, and testable code.
  • Strong foundation in software engineering best practices, including code reviews, documentation, testing, and CI/CD.
  • 6+ years of experience designing and modeling data in relational and non-relational databases, with a clear understanding of performance and scalability tradeoffs.
  • Proven ability to translate analytical or business problems into practical data models and pipelines, clearly articulating design decisions.
  • Experience working with modern data warehouses, including dimensional modeling, ELT workflows, and query optimization.
  • Hands-on experience building, scheduling, and monitoring batch data pipelines using Airflow or comparable orchestration tools.
  • Strong understanding of data architecture fundamentals, including efficient storage, retrieval, and compute usage in cloud-based systems.
  • Practical experience with AWS, including core data and infrastructure services.
  • Production experience with Snowflake, including schema design, performance tuning, and cost-aware usage.
  • Experience working with sensitive or regulated data, and familiarity with compliance requirements such as HIPAA, GDPR, or CCPA.
  • Ability to independently own and deliver well-scoped data engineering projects with minimal supervision.
  • Comfortable supporting high-priority data requests and operational issues while maintaining code quality and system reliability.
  • Proficiency with core technologies such as Python, SQL (MySQL/Snowflake), Airflow, AWS, and Terraform.
Nice to have:

  • Experience with distributed data processing tools such as Apache Spark.
  • Familiarity with serverless architectures (e.g., AWS Lambda) for data ingestion or transformation use cases.
  • Experience collaborating with Marketing or Growth teams, including exposure to paid channel data and automation workflows.
  • Background working in consumer, e-commerce, or DTC environments.
Benefits & perks:

  • 💰 Competitive compensation and equity package (RSUs)
  • 🥼 Comprehensive benefits: Medical, dental, vision, FSA and HSA, supplemental coverages (critical illness, accident, hospitalization), and 401(k)
  • 🧘🏻‍♀️ Access to wellbeing perks, including OneMedical, Spring Health, SoFi, and Employee Assistance Program
  • 🌴 Flexible paid time off and holiday policy
  • 🐣 Paid parental leave (birthing and non-birthing parents)
  • 💜 Employee donation matching program
  • 🫱🏻‍🫲🏽 Culture Committee and employee resource groups for virtual and in-person connectivity
  • ✨ Complimentary VIP Subscription to Curology or Agency, plus online retail discount