Senior Data Engineer (remote)

Knowledge Services · Indianapolis, IN · Remote

About The Position

Knowledge Services is seeking a remote Senior Data Engineer for a 6-month contract, with potential for extension. This role may work 100% remotely. Please note that we CANNOT CONSIDER anyone requiring C2C or sponsorship for a work visa.

Overview

The Sr. Data Engineer will lead the design, development, and optimization of data pipelines across diverse sources. This role focuses on efficient data extraction, staging, and loading into our Snowflake-based data warehouse, ensuring high availability, accuracy, and performance. The ideal candidate will bring a strong technical foundation in modern data engineering practices, hands-on experience with Snowflake and tools like Fivetran, and a collaborative mindset.

Requirements

  • Minimum of 5 years’ experience in data engineering, with a strong focus on data extraction and cloud-based warehousing; a combination of years of experience and relevant advanced technology proficiency will also be considered.
  • Proficiency with Snowflake and data integration tools like Fivetran.
  • Advanced SQL skills and experience with ETL/ELT frameworks.
  • Experience with scripting languages such as Python for data processing and automation.
  • Solid understanding of data modeling and relational database design.
  • Strong communication skills and the ability to collaborate with technical and non-technical stakeholders.
  • Strong analytical and problem-solving skills, with the ability to identify and resolve complex data engineering challenges.

Nice To Haves

  • Bachelor's or Master’s degree in Computer Science, Information Systems, or a related field.
  • Snowflake Architect, Administrator, or Data Engineering certification.
  • Experience with dbt (data build tool) for managing data transformations, modeling, and maintaining version-controlled, modular SQL pipelines.
  • Familiarity with cloud platforms such as AWS and Azure, including services like S3, Lambda, Redshift, Glue, Azure Data Lake, and Synapse.

Responsibilities

  • Develop efficient and scalable data extraction methodologies to retrieve data from diverse sources, such as databases, APIs, web scraping, flat files, and streaming platforms.
  • Design and implement robust data loading processes to efficiently ingest and integrate data into the latest data warehousing technology, ensuring data quality and consistency.
  • Develop and maintain staging processes to facilitate the organization and transformation of raw data into structured formats, preparing it for downstream analysis and reporting.
  • Implement data quality checks and validation processes to identify and address data anomalies, inconsistencies, and integrity issues.
  • Identify and resolve performance bottlenecks in data extraction and loading processes, optimizing overall system performance and data availability.
  • Ensure adherence to data security and privacy standards throughout the data extraction and warehousing processes, implementing appropriate access controls and encryption mechanisms.
  • Create and maintain comprehensive documentation of data extraction and warehousing processes, including data flow diagrams, data dictionaries, and process workflows.
  • Mentor and support junior data engineers, providing guidance on best practices, technical design, and professional development to elevate overall team capability and performance.
  • Collaborate with cross-functional teams, including data scientists, data analysts, software engineers, and business stakeholders, to understand their data requirements and provide efficient data engineering solutions.
  • Stay updated with the latest advancements in data engineering, data warehousing, and cloud technologies, and proactively propose innovative solutions to enhance data extraction and warehousing capabilities.