Data Engineer | Home Services

Red Ventures | Charlotte, NC
5d | $80,000 - $120,000 | Hybrid

About The Position

This role is not open to visa sponsorship or transfer of visa sponsorship (including H-1B, F-1, OPT, STEM OPT, or TN visas), nor is it available on a corp-to-corp basis. This role follows a hybrid schedule based in our Fort Mill, SC headquarters (Tuesday through Thursday), with fully remote work on Mondays and Fridays each week.

As a Data Engineer at Red Ventures, you’ll build data products and create the foundation that powers our machine learning and business analytics efforts. You’ll work hand-in-hand with stakeholders from functional groups across the organization to create end-to-end solutions. Red Ventures is a high-autonomy, high-ownership environment; you’ll own your work from idea to production solution. Our data engineering tech stack is primarily AWS and Spark/SparkSQL/Python via Databricks, though we welcome strong applicants from a wide variety of technical backgrounds. We believe that diverse, inclusive teams are better teams. Think of the bullets below as guidelines: if you only partially meet the qualifications on this posting, we encourage you to apply anyway!

Our Data Engineers within the Home vertical play a pivotal role in constructing the data processing pipelines that drive our proprietary brands and key partnerships. As a member of this team, you will contribute to the development of a homogenized, multi-tenant data warehouse. Your primary responsibility will be hands-on work transforming data from diverse enterprise systems into a unified data platform housing robust datasets ready for analytics and reporting. This entails mastering upstream processes, pipelines, and source systems, and collaborating with various functional units to ensure the successful deployment, operation, and maintenance of solutions.

Requirements

  • 2+ years of experience working with SQL
  • 2+ years of experience performing production data engineering/ETL work
  • 2+ years of experience with one of the major cloud providers (we use AWS but we welcome candidates with experience in Azure or GCP)
  • 2+ years of experience working on Spark/SparkSQL using Scala/Python to build and maintain complex ETL pipelines
  • Experience with GitHub and CI/CD processes
  • Experience working on Orchestration (Databricks Workflows / Airflow)
  • Experience with one of the major data warehousing solutions (we use Databricks but we welcome candidates with experience in BigQuery, Snowflake, Oracle or Redshift)
  • Conceptual understanding of data warehousing and dimensional modeling
  • Experience providing operational support for production data pipelines, including data triage
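To illustrate the dimensional modeling concept the requirements mention, here is a minimal star-schema sketch: one fact table joined to one dimension table. All table and column names are hypothetical, and sqlite3 stands in for a warehouse engine such as Databricks SQL; the same SQL pattern translates directly to SparkSQL.

```python
import sqlite3

# Hypothetical star schema: a fact table of leads keyed to a service dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_service (
    service_key  INTEGER PRIMARY KEY,
    service_name TEXT NOT NULL
);
CREATE TABLE fact_lead (
    lead_id     INTEGER PRIMARY KEY,
    service_key INTEGER REFERENCES dim_service(service_key),
    lead_date   TEXT NOT NULL,
    revenue     REAL NOT NULL
);
INSERT INTO dim_service VALUES (1, 'plumbing'), (2, 'roofing');
INSERT INTO fact_lead VALUES
    (100, 1, '2024-01-05', 40.0),
    (101, 1, '2024-01-06', 55.0),
    (102, 2, '2024-01-06', 80.0);
""")

# A typical analytic query: aggregate fact measures by a dimension attribute.
rows = conn.execute("""
    SELECT d.service_name, SUM(f.revenue) AS total_revenue
    FROM fact_lead f
    JOIN dim_service d USING (service_key)
    GROUP BY d.service_name
    ORDER BY d.service_name
""").fetchall()
print(rows)  # [('plumbing', 95.0), ('roofing', 80.0)]
```

The separation of descriptive attributes (dimensions) from numeric measures (facts) is what makes datasets "ready for analytics and reporting," as the posting puts it.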

Nice To Haves

  • Familiarity with SaaS data tools such as Fivetran and Hightouch

Responsibilities

  • Design and build data pipelines from various sources to the data warehouse, using batch or incremental loading strategies and cutting-edge cloud technologies.
  • Conceptualize and build infrastructure that allows data to be accessed and analyzed effectively.
  • Transform existing data from diverse enterprise systems into a unified data platform housing robust datasets ready for analytics and reporting.
  • Master upstream processes, pipelines, and source systems.
  • Collaborate with various functional units to ensure the successful deployment, operation, and maintenance of solutions.
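The incremental loading strategy mentioned above can be sketched as a watermark-based extract: each run pulls only rows changed since the last successful load, then advances the watermark. Table and column names here are hypothetical, and sqlite3 stands in for both the source system and the warehouse; in practice this pattern maps to a Spark read filtered on a change-tracking column.

```python
import sqlite3

# Hypothetical source table with a change-tracking timestamp column.
src = sqlite3.connect(":memory:")
src.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, updated_at TEXT, amount REAL);
INSERT INTO orders VALUES
    (1, '2024-01-01T00:00:00', 10.0),
    (2, '2024-01-02T00:00:00', 20.0),
    (3, '2024-01-03T00:00:00', 30.0);
""")

def incremental_extract(conn, watermark):
    """Pull only rows changed since the last successful load."""
    return conn.execute(
        "SELECT id, updated_at, amount FROM orders"
        " WHERE updated_at > ? ORDER BY id",
        (watermark,),
    ).fetchall()

# This run's watermark excludes row 1, which was loaded previously.
batch = incremental_extract(src, '2024-01-01T12:00:00')
print(batch)  # rows 2 and 3 only

# Advance the watermark to the max change timestamp seen, for the next run.
new_watermark = max(row[1] for row in batch)
```

A full batch load would simply omit the `WHERE` clause; the incremental variant trades that simplicity for much smaller per-run data volumes.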

Benefits

  • Health Insurance Coverage (medical, dental, and vision)
  • Life Insurance
  • Short and Long-Term Disability Insurance
  • Flexible Spending Accounts
  • Holiday Pay
  • 401(k) with match
  • Employee Assistance Program
  • Paid Parental Bonding Benefit Program
  • Flexible Paid Time Off (PTO): We believe time to rest and recharge is essential. That’s why we offer a generous and flexible PTO policy. Full-time employees accrue 20 days of PTO for a full calendar year annually, with an increase to 25 days after five years of service.
© 2024 Teal Labs, Inc