Senior Data Engineer

Range Resources, Fort Worth, TX
Hybrid

About The Position

The Senior Data Engineer designs, builds, and maintains the scalable cloud-based infrastructure required to collect, process, and store large volumes of data used for analytics and decision-making. This includes developing and managing cloud-based data systems such as data lakes, data warehouses, and ETL/ELT pipelines, ensuring data flows reliably from diverse sources into centralized platforms. The Senior Data Engineer is responsible for cleaning, transforming, and validating data to ensure high levels of quality, consistency, and security, and collaborates closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable, performance-optimized solutions. They also monitor, troubleshoot, and continuously improve data infrastructure to support current and future business needs, and may coach, review, and delegate work to lower-level professionals as needed.
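As an illustration of the cleaning, transformation, and validation work described above, here is a minimal plain-Python sketch; the field names, validation rules, and sample rows are hypothetical, not taken from the posting (a production pipeline would typically express this in PySpark or T-SQL):

```python
# Hypothetical sketch: validating and normalizing raw records before
# loading them into a centralized platform. All names are illustrative.
from datetime import datetime
from typing import Optional

REQUIRED_FIELDS = {"well_id", "reading", "recorded_at"}

def validate_record(raw: dict) -> Optional[dict]:
    """Return a cleaned record, or None if the record fails validation."""
    if not REQUIRED_FIELDS.issubset(raw):
        return None  # reject records missing required fields
    try:
        reading = float(raw["reading"])
        recorded_at = datetime.fromisoformat(raw["recorded_at"])
    except (TypeError, ValueError):
        return None  # reject unparseable values
    return {
        "well_id": str(raw["well_id"]).strip().upper(),  # normalize identifiers
        "reading": reading,
        "recorded_at": recorded_at.isoformat(),
    }

rows = [
    {"well_id": " rx-101 ", "reading": "42.5", "recorded_at": "2024-01-15T08:00:00"},
    {"well_id": "RX-102", "reading": "bad"},  # missing field and bad value
]
clean = [c for r in rows if (c := validate_record(r)) is not None]
```

The same pattern of reject-or-normalize per record underlies schema enforcement in tools like Delta Lake, just expressed declaratively rather than row by row.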

Requirements

  • Minimum Education - BS in Computer Information Technology, BS in Management Information Systems, or BBA in Business Information Systems, or equivalent work experience.
  • Minimum Experience - 7+ years preferred designing, building, and operating production-grade data pipelines and platforms, with expertise in data warehousing, cloud data services, and data modeling practices.
  • Database Mastery: Expert-level SQL Server and T-SQL knowledge.
  • Cloud Platforms: Extensive hands-on experience with the Azure data stack (Data Lake Gen2, Synapse, ADF) and Microsoft Fabric.
  • Advanced Warehousing: Proficiency in Snowflake (micro-partitioning, clustering, Snowpipe) and Databricks (Unity Catalog, Delta Lake).
  • Languages: Strong coding skills in Python and PySpark for complex data transformations.
  • DevOps & Automation: Experience with CI/CD pipelines via Azure DevOps or GitHub Actions.

Nice To Haves

  • Professional Certifications - Microsoft Fabric, Databricks, or Snowflake preferred

Responsibilities

  • Cloud Migration Strategy: Design and execute end-to-end migration strategies to move on-premises SQL Server workloads to Azure, Fabric, Databricks, or Snowflake.
  • Pipeline Development: Build and maintain scalable ETL/ELT pipelines using Azure Data Factory (ADF), Databricks (PySpark), and Microsoft Fabric.
  • Platform Integration: Manage cross-platform data flows, ensuring seamless integration between Snowflake/Databricks/Fabric warehouses and on-premises SQL Server.
  • Architecture Design: Implement Medallion Architecture (Bronze/Silver/Gold) and Lakehouse patterns to support both batch and real-time analytics.
  • SQL Server: Write and optimize complex queries, tune performance (indexes, query plans), manage partitioning, and support large-volume ETL workloads.
  • Regular and timely attendance
  • Deal professionally and respectfully with coworkers, management and others
  • Read, comprehend and follow applicable policies, procedures and directions
  • Work with others as part of a team to ensure efficient operations and enhanced productivity
  • Safeguard confidential information and disclose only to those in "need-to-know" positions
  • Safeguard and enhance Range's assets and business interests
  • Consistently perform all job duties at an acceptable level
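The Medallion Architecture named in the responsibilities above layers data as Bronze (raw), Silver (cleaned and deduplicated), and Gold (aggregated for analytics). A minimal plain-Python sketch of that flow, with hypothetical record shapes and rules (a real implementation would use Delta Lake tables and PySpark in Databricks or Fabric):

```python
# Hypothetical Medallion sketch: Bronze keeps raw records as-is,
# Silver cleans and deduplicates, Gold aggregates for analytics.
from collections import defaultdict

def to_silver(bronze_rows):
    """Clean and deduplicate raw Bronze records by (well_id, date)."""
    seen, silver = set(), []
    for row in bronze_rows:
        if row.get("reading") is None:
            continue  # drop unusable raw records
        key = (row["well_id"], row["date"])
        if key in seen:
            continue  # keep the first occurrence only
        seen.add(key)
        silver.append({**row, "reading": float(row["reading"])})
    return silver

def to_gold(silver_rows):
    """Aggregate Silver records into per-well totals for reporting."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["well_id"]] += row["reading"]
    return dict(totals)

bronze = [
    {"well_id": "A", "date": "2024-01-01", "reading": "10.0"},
    {"well_id": "A", "date": "2024-01-01", "reading": "10.0"},  # duplicate
    {"well_id": "B", "date": "2024-01-01", "reading": None},    # bad record
    {"well_id": "A", "date": "2024-01-02", "reading": "5.5"},
]
gold = to_gold(to_silver(bronze))
```

The key design point the layering buys is that each stage is re-derivable from the one below it, so fixing a Silver cleaning rule never requires re-ingesting raw source data.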