Sr. Data Engineer

Halvik · Vienna, VA

About The Position

Halvik Corp delivers a wide range of services to 13 executive agencies and 15 independent agencies. Halvik is a highly successful woman-owned business (WOB) with more than 50 prime contracts and 500+ professionals delivering Digital Services, Advanced Analytics, Artificial Intelligence/Machine Learning, Cyber Security, and cutting-edge technology across the US Government. Be a part of something special!

Role Summary

The Senior Data Engineer will be a key contributor within the Data Products Delivery Area, responsible for designing, building, and operating scalable, secure, and high-quality data pipelines that power Business Intelligence & Analytics (BI&A) and AI/ML products. This role supports a Databricks Data Lakehouse using Medallion Architecture, built on Data Fabric principles, and operates within a release-based Agile framework. It requires strong hands-on development with Python, PySpark, SQL, and APIs, and experience integrating diverse enterprise and external data sources.

Requirements

  • Strong hands-on experience with:
      o Python for data engineering and API integration
      o PySpark / Apache Spark for large-scale data processing
      o SQL (advanced querying and performance tuning)
  • Proven experience building API-driven data ingestion pipelines (REST, JSON, OAuth, pagination, throttling).
  • Strong experience with Databricks, Delta Lake, and Lakehouse Architecture.
  • Experience implementing Medallion Architecture in production environments.
  • Experience working in cloud platforms (AWS, Azure, or GCP).
  • Familiarity with:
      o CI/CD pipelines for Python and Spark workloads
      o Infrastructure as Code concepts
      o Secure API authentication and secrets management
  • Cost-conscious engineering mindset (FinOps exposure is a plus).
  • Experience supporting BI&A workloads at scale.
  • Understanding of data modeling for analytics and reporting.
  • Experience enabling data consumption for AI/ML use cases.
  • Proven experience working in Agile Scrum / Scrum of Scrums environments.
  • Comfortable working with release-based delivery and production support.
  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration abilities.
  • Ownership mindset with a focus on reliability and quality.
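As an illustration of the API ingestion patterns listed above (REST, JSON, pagination, throttling), here is a minimal Python sketch. The response shape (`"results"` list plus a `"next"` flag), the page numbering, and the throttle interval are all hypothetical stand-ins, not a specific agency API:

```python
import time
from typing import Callable, Iterator


def paginate(fetch: Callable[[int], dict], throttle_s: float = 0.0) -> Iterator[dict]:
    """Yield records from a paginated REST-style API.

    `fetch(page)` is expected to return a JSON-like dict with a
    "results" list and a "next" flag (a common, but hypothetical, shape).
    """
    page = 1
    while True:
        payload = fetch(page)
        yield from payload.get("results", [])
        if not payload.get("next"):
            break
        page += 1
        time.sleep(throttle_s)  # naive client-side throttling between pages


# Offline demo using a fake two-page API response (no network needed)
def fake_fetch(page: int) -> dict:
    data = {
        1: {"results": [{"id": 1}, {"id": 2}], "next": True},
        2: {"results": [{"id": 3}], "next": False},
    }
    return data[page]


records = list(paginate(fake_fetch))
print(records)  # [{'id': 1}, {'id': 2}, {'id': 3}]
```

In a production pipeline the `fetch` callable would wrap an authenticated HTTP client (OAuth token refresh, retry/backoff on 429s), but injecting it as a parameter keeps the pagination logic testable in isolation.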

Nice To Haves

  • Experience exposing data via internal or external APIs.
  • Familiarity with streaming technologies (Kinesis, Firehose, Kafka, Event Hubs, etc.).
  • Experience with data governance, cataloging, and lineage tools.
  • Exposure to feature stores or ML data pipelines.

Responsibilities

Data Engineering & Architecture

  • Design, develop, and optimize end-to-end data pipelines using Python and PySpark.
  • Build and maintain data ingestion frameworks leveraging REST APIs, streaming APIs, and batch interfaces.
  • Implement Medallion Architecture (Bronze, Silver, Gold) within a Databricks Lakehouse.
  • Integrate data from structured, semi-structured, and unstructured sources, including API-based and event-driven sources.
  • Apply Data Fabric principles including metadata-driven ingestion, lineage, observability, and reusability.
  • Collaborate with Senior Data Architects on logical and physical data models.
Data Products & Delivery

  • Deliver data assets as releasable products.
  • Support downstream consumers including:
      o BI&A dashboards and visualizations
      o AI/ML feature engineering, training, and inference pipelines
      o Data services and curated datasets exposed via APIs
  • Decompose requirements into well-defined user stories aligned with MoSCoW prioritization.
Data Quality, Governance & Security

  • Implement data validation, reconciliation, and quality checks within PySpark pipelines.
  • Ensure secure handling of PII and sensitive data, including encryption, masking, and access controls.
  • Partner with CloudOps (DevSecOps, FinOps, InfraOps) to ensure:
      o Secure API access and secrets management
      o CI/CD automation for Python and PySpark workloads
      o Cost-optimized compute and storage usage
Production Support & Optimization

  • Support operations and maintenance (O&M) for production data pipelines, APIs, and analytics products.
  • Tune Spark jobs, SQL queries, and API integrations for performance and reliability.
  • Implement monitoring, logging, and alerting for data workflows.
  • Drive automation and refactoring to improve resiliency and scalability.
Collaboration & Leadership

  • Mentor junior and mid-level data engineers on Python, PySpark, and API integration best practices.
  • Work closely with:
      o Business Analysts & Data Architects
      o Data Scientists & ML Engineers
      o BI Developers
      o Automation Test Engineers
  • Contribute to reusable frameworks, coding standards, and engineering best practices.
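To ground the data-quality responsibilities above, here is a plain-Python sketch of a Bronze-to-Silver quality gate. In the role described, this logic would live in a PySpark pipeline writing Delta tables in a Medallion layout, but the validation pattern (validate, split into accepted rows and rejects, standardize values) is the same; the field names are hypothetical:

```python
def promote_to_silver(bronze_rows):
    """Split raw (Bronze) rows into validated (Silver) rows and rejects.

    Hypothetical rules: every row needs a non-null "id" and a numeric
    "amount"; accepted amounts are standardized to two decimal places.
    """
    silver, rejects = [], []
    for row in bronze_rows:
        # Required-field and basic type checks
        if row.get("id") is None or not isinstance(row.get("amount"), (int, float)):
            rejects.append(row)  # quarantined for reconciliation, not dropped
            continue
        silver.append({**row, "amount": round(float(row["amount"]), 2)})
    return silver, rejects


# Demo with one clean row and two rows that fail validation
bronze = [
    {"id": 1, "amount": 10.129},
    {"id": None, "amount": 5},
    {"id": 2, "amount": "bad"},
]
silver, rejects = promote_to_silver(bronze)
print(len(silver), len(rejects))  # 1 2
```

Keeping rejects as a separate output (rather than silently filtering) is what makes reconciliation and data-quality reporting possible downstream.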

Benefits

Halvik offers a competitive full benefits package, including:

  • Company-supported medical, dental, vision, life, STD, and LTD insurance
  • 11 federal holidays and PTO
  • Eligible employees may receive performance-based incentives in recognition of individual and/or team achievements.
  • 401(k) with company matching
  • Flexible Spending Accounts for commuter, medical, and dependent care expenses
  • Tuition Assistance
  • Charitable Contribution matching


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Education Level

No Education Listed

Number of Employees

101-250 employees
