Data Engineer I (Base Camp)

Quorum Software · Dallas, TX
Hybrid

About The Position

We are looking for a Data Engineer I to join our Data Platform team and help build the foundational data infrastructure that powers analytics, reporting, and AI/ML capabilities across Quorum’s product portfolio. You will work alongside experienced data engineers and architects to design data models, build and maintain data pipelines, and ensure data quality across our platform. This is an entry-level role ideal for recent graduates or early-career professionals who are passionate about data engineering and eager to grow. You’ll gain hands-on experience with modern cloud data technologies while contributing to a strategic platform that serves 1,800+ energy companies worldwide. You will report to the Data Platform team manager and collaborate closely with product engineering teams, data architects, and data scientists.

Requirements

  • Bachelor’s degree in Computer Science, Data Science, Information Systems, Statistics, Industrial Engineering, or a related technical field
  • Strong proficiency in SQL, including the ability to write complex queries, joins, aggregations, and window functions
  • Programming experience in Python, with exposure to data processing libraries (e.g., Pandas, PySpark) preferred
  • Foundational understanding of data modeling concepts (relational, dimensional, star schema)
  • Familiarity with cloud platforms, preferably Microsoft Azure (Azure Data Factory, Azure SQL, Azure Data Lake Storage, or similar services)
  • Exposure to or coursework in ETL/ELT pipeline design and data integration concepts
  • Basic understanding of version control systems (Git) and collaborative development workflows
  • Strong analytical and problem-solving skills with attention to detail
  • Excellent communication skills (written and verbal) and ability to work effectively in a team environment
  • Eagerness to learn, take feedback, and grow in a fast-paced engineering organization
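To illustrate the SQL proficiency described above (complex queries with window functions), here is a minimal sketch run through Python's standard-library `sqlite3` module. It assumes SQLite 3.25 or newer (window-function support); the table and column names are illustrative, not from any Quorum system.

```python
import sqlite3

# In-memory database with a small illustrative production table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE daily_production (well_id TEXT, day TEXT, volume REAL);
INSERT INTO daily_production VALUES
  ('W1', '2024-01-01', 100.0),
  ('W1', '2024-01-02', 120.0),
  ('W2', '2024-01-01',  80.0),
  ('W2', '2024-01-02',  90.0);
""")

# Running total of volume per well: an aggregate applied over a window,
# partitioned by well and ordered by day.
rows = conn.execute("""
SELECT well_id, day, volume,
       SUM(volume) OVER (PARTITION BY well_id ORDER BY day) AS running_volume
FROM daily_production
ORDER BY well_id, day
""").fetchall()

for row in rows:
    print(row)
```

The same `PARTITION BY ... ORDER BY` pattern carries over directly to Spark SQL and other cloud warehouses.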

Nice To Haves

  • Experience with Databricks, Apache Spark, or Delta Lake (including coursework, internships, or personal projects)
  • Familiarity with data orchestration and transformation tools such as Apache Airflow, Azure Data Factory, or dbt
  • Exposure to big data concepts and distributed computing frameworks
  • Understanding of data governance principles, data lineage, and metadata management
  • Familiarity with AI/ML concepts and how data engineering supports machine learning workflows (e.g., feature engineering, training dataset preparation)
  • Internship or project experience in data engineering, analytics engineering, or business intelligence
  • Knowledge of the oil and gas industry or energy sector is a plus but not required
  • Experience with Agile/Scrum methodologies and tools such as Azure DevOps or Jira
  • Experience with, or ability to quickly learn, Microsoft Fabric and OneLake

Responsibilities

  • Build, test, and maintain ETL/ELT data pipelines that ingest, transform, and deliver data from multiple source systems into our centralized data platform
  • Develop and maintain dimensional data models (fact and dimension tables) following established patterns and standards set by the data architecture team
  • Write and optimize SQL queries and transformations for data processing workloads
  • Build and maintain medallion architecture (bronze, silver, and gold layers) within a data lake
  • Implement and monitor data quality checks, validation rules, and alerting to ensure data accuracy and reliability
  • Work within our cloud data platform (Databricks, Azure Data Services, or similar) to build scalable, production-grade data solutions
  • Collaborate with product engineering teams to understand source system schemas, data flows, and business context across Quorum’s Upstream, Measurement, and Midstream product lines
  • Support the development and maintenance of data catalogs, documentation, and metadata to promote data discoverability and governance
  • Participate in code reviews, pair programming, and team retrospectives to continuously improve engineering practices
  • Troubleshoot data pipeline failures, investigate data anomalies, and implement fixes in a timely manner
  • Contribute to the team’s agile development processes including sprint planning, estimation, and daily standups
  • And other duties as assigned
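The medallion pattern and data-quality checks named in the responsibilities above can be sketched as a toy pipeline: raw records land in a "bronze" layer, are validated and typed into "silver", then aggregated into business-ready "gold". Plain Python stands in for Databricks/Delta Lake tables here, and the record shapes and field names are illustrative assumptions, not Quorum's actual schemas.

```python
# Bronze: raw ingested records, kept as-is (note the malformed volume value).
bronze = [
    {"well_id": "W1", "volume": "100.5"},
    {"well_id": "W2", "volume": "not-a-number"},
    {"well_id": "W1", "volume": "99.5"},
]

def to_silver(records):
    """Silver: validated, typed records; malformed rows are dropped."""
    silver = []
    for rec in records:
        try:
            silver.append({"well_id": rec["well_id"],
                           "volume": float(rec["volume"])})
        except (KeyError, ValueError):
            continue  # a production pipeline would quarantine the row and alert
    return silver

def to_gold(records):
    """Gold: business-ready aggregate -- total volume per well."""
    totals = {}
    for rec in records:
        totals[rec["well_id"]] = totals.get(rec["well_id"], 0.0) + rec["volume"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'W1': 200.0}
```

The layering keeps raw data replayable (bronze is never mutated) while quality rules live in one place, at the bronze-to-silver boundary.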