Data Engineer, Darden School of Business

UVA Health System — Newcomb Hall, VA
Hybrid · Posted 4 days ago

About The Position

The University of Virginia Darden School of Business, one of the world's leading business schools, seeks a highly skilled and motivated Data Engineer to join its Strategic IT Data & Analytics team. The Data Engineer will play a crucial role in building and maintaining our data infrastructure, ensuring the efficient extraction, loading, and transformation (ELT) of data from various sources into our cloud data environments. The ideal candidate has hands-on experience with ELT processes in Databricks. If you are passionate about data engineering, have strong analytical skills, care about higher education, and enjoy working in a collaborative environment, we would love to hear from you. This position is hybrid, located in Charlottesville, VA, and requires in-person work at least two days per week.

Requirements

  • High school diploma and at least 4 years of relevant experience required.
  • Education may substitute for experience: an Associate's or Bachelor's degree with 2 years of relevant experience also qualifies.
  • Bachelor's degree in computer science, MIS, engineering, business, or related field, with at least five years of relevant experience preferred.
  • Proven experience developing and maintaining data pipelines and ELT processes.
  • Strong proficiency with Databricks, Unity Catalog, and Delta Lake.
  • Expertise in SQL and Python for ingestion, transformation, and automation.
  • Experience with REST APIs and web services for data integration.
  • Strong understanding of data warehousing concepts and dimensional modeling.
  • Familiarity with CI/CD and DevOps practices for data engineering.
  • Strong analytical and problem-solving skills working with complex datasets.
  • Effective communication and collaboration skills.

Nice To Haves

  • Databricks Data Engineer Associate or Professional certification.
  • Experience integrating Databricks with enterprise governance platforms such as Microsoft Purview.
  • Coursework or certification in Databricks Machine Learning or AI Engineering.
  • Experience with MLflow, Feature Store, Model Serving, or AI workflow orchestration.
  • Experience with vector search, embeddings, or LLM-based application components.
  • Familiarity with machine learning operational practices and monitoring.
  • Experience with Microsoft Power BI or Microsoft Fabric.

Responsibilities

  • Design and implement scalable, metadata-driven data pipelines using cloud-based ELT frameworks to support enterprise analytics and reporting.
  • Collaborate with cross-functional teams to translate data requirements into Lakehouse-integrated solutions.
  • Build and maintain data models, integration workflows, and transformation processes supporting analytics and reporting.
  • Develop, orchestrate, and optimize data pipelines to ensure reliability, data quality, and performance.
  • Monitor and troubleshoot production pipelines to maintain availability, scalability, and operational excellence.
  • Identify and resolve performance bottlenecks, data quality issues, and compute inefficiencies.
  • Support implementation of data governance, security controls, and privacy requirements for regulated data environments.
  • Stay current with emerging data engineering and cloud technologies to enhance platform capabilities.
  • Extend ELT architectures to support machine learning workloads.
  • Build feature engineering pipelines using Databricks Feature Store, Delta Lake, or vectorization techniques.
  • Assist in deploying and monitoring machine learning models using MLflow or Databricks Model Serving.
  • Implement vector search, embedding pipelines, or retrieval-augmented generation (RAG) components.
  • Contribute to orchestrating AI workflows or agent-based pipelines using Lakehouse AI capabilities.
  • Develop CI/CD workflows for data and ML artifacts to ensure reproducibility and alignment with governance standards.
  • Troubleshoot and optimize model and feature pipelines.
  • Track emerging AI technologies and identify opportunities to apply them within the platform.
  • Integrate Databricks and Unity Catalog with enterprise data governance tools to ensure compliant access and lineage tracking.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 5,001-10,000

© 2024 Teal Labs, Inc