Data Engineer

United Network for Organ Sharing, Richmond, VA

About The Position

We are seeking a Data Engineer with proven hands-on experience designing and developing highly available data management platforms and solutions for both on-premises and Azure-based cloud environments. This technical role partners closely with data scientists, analysts, and business stakeholders to ensure that data is accurate, accessible, and optimized for analytics, reporting, and operational needs. The engineer will play a key role in building scalable data pipelines and integrations while championing best practices and leading strategic data initiatives across teams.

Requirements

  • 5 years of experience in data engineering, including 1+ years with Azure cloud-native solutions
  • 3+ years with SQL Server Integration Services (SSIS)
  • Expertise in MS SQL Server, SSIS, Python (pandas, PySpark), Azure Data Factory, Azure Functions, and Azure Data Lake Storage
  • Experience working with a variety of file formats (e.g., CSV, JSON, XML, Parquet)
  • Familiarity with REST APIs for data extraction and integration
  • Proven experience designing and implementing data solutions
  • Good understanding of cloud architecture, data warehousing, and modern data stack components

Nice To Haves

  • Demonstrated ability to perform root cause analysis on data and processing issues
  • Strong problem-solving skills with the ability to explain technical concepts to non-technical audiences
  • A successful history of manipulating, processing, and extracting value from large, disparate datasets
  • Experience with Big Data technologies such as Databricks, Spark, or Azure Synapse
  • Knowledge of CI/CD workflows, version control, and agile development practices
  • Familiarity with data governance, privacy, and compliance frameworks
  • Experience with data warehousing, analytics tools, and BI platforms

Responsibilities

  • Develop and implement secure, scalable data pipelines using Azure Data Factory, Azure Functions, and Azure Data Lake Storage
  • Design and build reliable ETL/ELT processes using Python (pandas, PySpark), SQL, and SSIS, integrating data from diverse sources including REST APIs and various file formats (CSV, JSON, XML, Parquet); see the illustrative sketch after this list
  • Collaborate cross-functionally to gather requirements and translate business needs into technical deliverables
  • Detect and resolve data quality issues; implement automated audits and monitoring processes
  • Participate in schema design and performance tuning
  • Mentor and support junior engineers across data engineering, analytics, and BI teams
  • Participate in code reviews, promoting clean, well-documented, and testable code
  • Stay current on trends in data engineering and cloud technologies, identifying opportunities to innovate
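
To illustrate the kind of ETL/ELT work these responsibilities describe, here is a minimal sketch in Python (pandas) that pulls JSON from a REST API, applies basic data-quality checks, and writes Parquet. It is illustrative only: the endpoint URL, the "id" column, and the output path are hypothetical and not part of this posting.

    # Minimal ETL sketch: extract JSON from a REST API, clean it, write Parquet.
    # The endpoint, key column, and output path below are hypothetical.
    import pandas as pd
    import requests

    API_URL = "https://api.example.com/v1/records"   # hypothetical endpoint
    OUTPUT_PATH = "landing/records.parquet"          # hypothetical lake path

    def extract() -> pd.DataFrame:
        # Pull JSON records from the API and flatten them into a DataFrame.
        response = requests.get(API_URL, timeout=30)
        response.raise_for_status()
        return pd.json_normalize(response.json())

    def transform(df: pd.DataFrame) -> pd.DataFrame:
        # Basic data-quality checks: drop duplicates and rows missing the key.
        df = df.drop_duplicates()
        return df.dropna(subset=["id"])              # assumes an "id" column

    def load(df: pd.DataFrame) -> None:
        # Write cleaned data as Parquet for downstream analytics (needs pyarrow).
        df.to_parquet(OUTPUT_PATH, index=False)

    if __name__ == "__main__":
        load(transform(extract()))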