Data Solutions Engineer

PowerSecure, Inc.
Remote

About The Position

The Data Solutions Engineer designs, builds, and operates reliable, scalable data pipelines on PowerSecure’s Azure data platform to enable trusted analytics and self‑service BI. The role partners with BI Developers, the Data Architect, and business stakeholders to ingest, model, and deliver high‑quality data using Azure Data Factory, Databricks (PySpark/Spark SQL), and Delta Lake (including Delta Live Tables). Responsibilities include batch and streaming ingestion, ELT data transformation, orchestration, data quality enforcement, and operational support.

Requirements

  • 2+ years of hands‑on data engineering (or comparable software engineering with significant data work)
  • 2+ years building pipelines on Azure and Databricks (or equivalent cloud + Spark)
  • Strong SQL (analytical queries, window functions) and Python/PySpark skills, including performance tuning on large datasets, plus data modeling fundamentals
  • Bachelor’s degree in MIS, Computer Science, Engineering, or equivalent experience
  • Experience with Azure Databricks, Delta Lake, Delta Live Tables (DLT), Azure Data Factory (or Fabric Data Pipelines), ADLS Gen2, and Azure DevOps/Git for CI/CD
  • Working knowledge of Unity Catalog and/or Microsoft Purview for governance, lineage, and security
  • Familiarity with data ingestion patterns (files, APIs, JDBC), schema evolution, CDC, and deletion detection patterns
  • Understanding of dimensional modeling to produce analytics‑ready datasets for Power BI
  • Exposure to orchestration/monitoring, cost optimization, alerting, and runbook‑driven operations
  • Data pipeline design (batch & streaming), DLT expectations for data quality, and robust error handling
  • Source control, branching strategies, and CI/CD for data assets (notebooks, jobs, workflows)
  • Practical understanding of privacy, security, and RBAC in cloud data platforms
  • Excellent communication, documentation, and cross‑functional collaboration skills
  • Analytical mindset; bias toward automation and measurable reliability
  • Applicants must be legally authorized to work in the United States and must not require employment visa sponsorship now or in the future
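For illustration only (this sketch is not part of the posting): the CDC and deletion‑detection patterns named above amount to a keyed upsert/delete merge. On Databricks this is typically expressed as a Delta `MERGE INTO` with a deletion‑flag condition; the plain‑Python `apply_cdc` helper and event shape below are hypothetical, chosen only to make the pattern concrete.

```python
# Minimal sketch of a CDC merge with a deletion-flag pattern, in plain
# Python for portability. In Delta Lake this is usually a MERGE INTO
# statement; the event shape and helper name here are illustrative only.

def apply_cdc(target, changes):
    """Apply keyed change events (with an is_deleted flag) to target."""
    for event in changes:
        key = event["id"]
        if event.get("is_deleted"):
            target.pop(key, None)   # deletion detected: remove the row
        else:
            target[key] = event     # upsert: insert new or overwrite existing
    return target

target = {1: {"id": 1, "site": "A"}}
changes = [
    {"id": 1, "site": "B", "is_deleted": False},  # update existing row
    {"id": 2, "site": "C", "is_deleted": False},  # insert new row
    {"id": 1, "is_deleted": True},                # soft-delete signal
]
result = apply_cdc(target, changes)
```

After the merge, only the surviving row (id 2) remains in the target.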

Responsibilities

  • Design, implement, and support end‑to‑end ELT pipelines (ingest → transform → publish) in Databricks/ADF
  • Implement data quality checks (DLT expectations, unit tests) with alerting and remediation runbooks
  • Build curated, analytics‑ready Delta tables using dimensional modeling for consumption by BI Developers
  • Implement CDC and deletion‑flag patterns; manage schema drift and partitioning/Z‑Ordering strategies
  • Operationalize jobs with monitoring, logging, alerting; participate in an on‑call rotation as needed
  • Partner with the Data Architect to align designs with standards for governance, security, and cost efficiency
  • Document pipelines, data contracts, and SLAs; continuously improve performance and reliability
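For illustration only (this sketch is not part of the posting): the "DLT expectations" responsibility above refers to declarative data‑quality rules that Delta Live Tables applies per row (e.g. via the `@dlt.expect_or_drop` decorator). The plain‑Python `expect_or_drop` helper and sample rows below are hypothetical, mimicking only the drop‑on‑violation behavior so the sketch runs outside Databricks.

```python
# Minimal sketch of a data-quality "expectation" that drops violating rows,
# mimicking Delta Live Tables' expect_or_drop behavior. The helper name
# and sample rows are illustrative only.

def expect_or_drop(rows, name, predicate):
    """Keep rows satisfying the predicate; count and drop the rest."""
    kept = [r for r in rows if predicate(r)]
    dropped = len(rows) - len(kept)
    print(f"expectation {name!r}: kept {len(kept)}, dropped {dropped}")
    return kept

rows = [
    {"id": 1, "kwh": 42.0},
    {"id": 2, "kwh": None},   # violates the not-null rule
    {"id": 3, "kwh": -5.0},   # violates the non-negative rule
]
clean = expect_or_drop(
    rows, "valid_kwh",
    lambda r: r["kwh"] is not None and r["kwh"] >= 0,
)
```

In a real pipeline the dropped‑row counts would feed the alerting and remediation runbooks mentioned above.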

Benefits

  • Medical, dental, vision and life insurance coverage
  • Competitive pay and a matching 401(k) plan
  • Vacation, company holidays, and paid time off (PTO: personal and sick days)
  • Flexible spending accounts/health savings accounts
  • Wellness Incentive Programs
  • Employee Referral Program
  • Tuition Reimbursement