PySpark & Delta Lake Developer

i4DM - Millersville, MD
Posted 1 day ago

About The Position

Our employees thrive in a culture that's fast-paced and ego-free, where innovation and collaboration are encouraged at every turn. We provide federal agencies instant access to experienced, talented professionals who understand their unique challenges and know the most efficient ways to address them. We continually invest in resources and talent so that we stay prepared, with specialized teams in place who are experts in creating tailored technologies. Our solutions empower federal organizations to grow, modernize, and succeed in a rapidly evolving landscape. We welcome diverse perspectives and seek individuals who are passionate about technology and creative problem-solving. If you enjoy learning, growing, and tackling real-world challenges, you will thrive here. Veterans and military spouses are strongly encouraged to apply and bring their unique experience to our team.

About the Role

Our core values of People Matter, Integrity, and a Commitment to Excellence drive all that we do. By joining us, you will become part of a fun and diverse team of talented and creative consultants who share the goal of using the latest technology to solve business challenges. We provide our clients with a dynamic mix of services and deliver focused solutions like no one else. We are seeking bright team players who are passionate about technology and want to apply a creative approach to problem-solving in a fast-paced, dynamic, and ego-free culture. Team members who like to grow their skill sets while solving challenging, real-world business problems thrive here.

We are looking for an experienced PySpark & Delta Lake Developer who will be responsible for designing, building, and maintaining scalable ETL pipelines to process and analyze large-scale healthcare claims data. This role emphasizes building robust Delta Lake tables and ensuring ACID-compliant data lakes.
The ideal candidate will focus on developing efficient PySpark scripts and leveraging Delta Lake capabilities to deliver data reliability, high performance, and seamless schema evolution within an AWS environment.

Requirements

  • Strong proficiency in Python and PySpark, with hands-on experience developing data pipelines.
  • Advanced experience with Delta Lake and its ACID transaction and schema management features.
  • Solid SQL skills for querying, joining, and optimizing data in distributed environments.
  • Hands-on experience with AWS cloud data services (e.g., S3, Glue, EMR, Athena).
  • Familiarity with data lake concepts, partitioning, and performance tuning.
  • Excellent communication skills and a desire to continuously learn and adapt to innovative technologies.
  • Familiarity with CI/CD, version control (e.g., Git), and infrastructure as code.

Nice To Haves

  • Experience with healthcare or claims data.
  • Knowledge of data governance, security, data cataloging (AWS Glue Catalog), and compliance best practices.
  • Strong ability to prioritize and execute tasks independently and within collaborative team environments.
  • Previous experience working in a government or public sector setting.

Responsibilities

  • Design, develop, and maintain robust ETL pipelines using PySpark and Delta Lake for large and complex healthcare data workloads.
  • Implement and optimize data lake solutions using Delta Lake table formats, supporting ACID transactions, schema enforcement, and time travel.
  • Write efficient, reusable, and well-documented PySpark scripts for data ingestion, transformation, cleansing, and aggregation.
  • Collaborate with data engineers, architects, and data scientists to understand business and data requirements and translate them into scalable data solutions.
  • Ensure data quality, consistency, lineage, and integrity across all stages of data processing.
  • Troubleshoot, debug, and optimize PySpark applications and Delta Lake workflows for cost, speed, and reliability within AWS.
  • Maintain detailed and up-to-date technical documentation of code, data pipelines, and standard operating procedures.
  • Stay updated with the latest Delta Lake and Spark advancements, advocating for best practices in data management and analytics.



What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 11-50 employees
