About The Position

Halvik Corp delivers a wide range of services to 13 executive agencies and 15 independent agencies. Halvik is a highly successful woman-owned business (WOB) with more than 50 prime contracts and 500+ professionals delivering Digital Services, Advanced Analytics, Artificial Intelligence/Machine Learning, Cybersecurity, and cutting-edge technology across the US Government. Be a part of something special!

Architect the Data Pipelines of Tomorrow

Join the data revolution: you'll design and build the infrastructure that transforms raw data into actionable insights. This position offers an incredible opportunity to work with big data technologies, cloud-scale processing systems, and modern ETL frameworks that power data-driven decision making. You'll learn to orchestrate complex data workflows while working with cutting-edge tools like Apache Spark, Kafka, Airflow, and cloud data platforms that handle petabytes of information. What you'll learn and experience is detailed under Responsibilities below.

Requirements

  • Recent graduate with a degree in Computer Science, Data Science, Engineering, or related field
  • Must be a US citizen OR have lived continuously in the US for the past 3 years (US citizenship preferred)
  • Strong SQL skills and database experience (coursework, projects, or internships)
  • Demonstrated experience with ETL processes, data manipulation, or database management
  • Programming experience in Python, Java, Scala, or similar languages
  • Submit GRE or SAT scores with application
  • Portfolio showcasing data projects or database work

Nice To Haves

  • Experience with cloud platforms (AWS, Azure, GCP)
  • Familiarity with big data tools (Spark, Kafka, etc.)
  • Understanding of data warehousing concepts
  • Experience with version control and collaborative development

Responsibilities

  • Design and implement robust ETL/ELT pipelines using modern frameworks
  • Work with big data technologies (Spark, Kafka) and streaming data
  • Master cloud data platforms (Snowflake, Databricks, Redshift, Synapse)
  • Develop scalable database solutions using Postgres and other RDBMS
  • Optimize database performance and design scalable data architectures
  • Implement data quality monitoring and automated testing frameworks

Benefits

  • Company-supported medical, dental, vision, life, STD, and LTD insurance
  • 11 paid federal holidays and PTO
  • Performance-based incentives for eligible employees, recognizing individual and team achievements
  • 401(k) with company matching
  • Flexible Spending Accounts for commuter, medical, and dependent care expenses
  • Tuition Assistance
  • Charitable Contribution matching