Associate Data Engineer

Cast & Crew
Burbank, CA
$90,000 - $115,000

About The Position

We are looking for an Associate Data Engineer to help design, build, and maintain the data pipelines and ETL processes that power our payroll application platform. You will work with modern tools like Apache Kafka, Apache Spark, and Snowflake, collaborating with engineering and product teams to deliver reliable data solutions. This role offers an opportunity to learn data engineering hands-on while supporting the design, development, and maintenance of those pipelines and processes.

Requirements

  • Bachelor's degree in Computer Science, Data Science, Information Systems, Engineering, or related field (or equivalent combination of education and experience)
  • Basic programming skills in Python, JavaScript (Node.js), Scala, or similar languages
  • Foundational understanding of SQL and relational databases
  • Familiarity with data structures, algorithms, and software engineering concepts
  • Interest in learning big data technologies and distributed systems
  • Strong analytical thinking and problem-solving abilities
  • Excellent communication skills and ability to work in a team environment
  • Detail-oriented with commitment to data quality and accuracy
  • Eagerness to learn new technologies and adapt to changing requirements

Nice To Haves

  • Exposure to Apache Kafka or Apache Spark for large-scale data processing
  • Experience with ETL tools or frameworks (Airflow, Luigi, dbt, Talend, Informatica)
  • Knowledge of data pipeline orchestration and workflow management
  • Coursework or projects involving data warehousing or data lakes
  • Understanding of cloud platforms (AWS, Azure, GCP) and their data services
  • Familiarity with containerization (Docker) and orchestration (Kubernetes)
  • Experience with data modeling and schema design
  • Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB)
  • Exposure to CI/CD practices and DevOps principles
  • Understanding of data governance and compliance concepts

Responsibilities

  • Assist in building and maintaining data pipelines and support the development of ETL (Extract, Transform, Load) processes
  • Help implement data ingestion workflows using tools like Apache Kafka (see the sketch after this list)
  • Participate in code reviews, data validation, and testing activities
  • Write clean, maintainable code following team standards and conventions
  • Support integration of payroll and accounting systems data into the Snowflake data lake
  • Assist in mapping data fields between different systems and ensuring data quality
  • Help monitor data pipelines for issues and support troubleshooting efforts
  • Document data flows, transformations, and pipeline architectures
  • Partner with product teams to translate business requirements into technical data solutions
  • Participate in agile ceremonies including sprint planning, daily standups, backlog grooming, and retrospectives
  • Support data requests from analytics, finance, and business intelligence teams
  • Assist in implementing data quality checks and validation rules
  • Help create and maintain automated tests and clear documentation for data pipelines, schemas, and processes
  • Support monitoring and alerting systems for data pipeline health
  • Participate in root cause analysis when data issues occur
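For a concrete sense of the day-to-day work described above, here is a minimal sketch of an ingestion-plus-validation step, assuming Python with the kafka-python client. The topic name, schema fields, and validation rule are hypothetical and illustrative only, not taken from our actual pipelines:

    import json
    from kafka import KafkaConsumer

    # Hypothetical payroll record schema; real field mappings vary by source system.
    REQUIRED_FIELDS = {"employee_id", "pay_period", "gross_amount"}

    def is_valid(record: dict) -> bool:
        """Basic data quality check: required fields present, amount non-negative."""
        return REQUIRED_FIELDS.issubset(record) and record["gross_amount"] >= 0

    # Consume JSON payroll events from a (hypothetical) Kafka topic.
    consumer = KafkaConsumer(
        "payroll-events",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        record = message.value
        if is_valid(record):
            pass  # load into the warehouse, e.g. via a Snowflake connector or COPY
        else:
            print("rejected record:", record)  # in practice, route to a dead-letter topic

In practice, an orchestration tool such as Airflow would schedule and monitor steps like this, with alerting on validation failure rates.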

Benefits

  • Medical
  • Dental
  • Vision
  • PTO
  • Health and wellness programs
  • Employee discounts