Data Engineer

FHI 360
Remote

About The Position

It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business. Ready to take the next step in your career journey?

FHI 360 is hiring a junior- to mid-level Data Engineer fluent in SQL and Python to build, debug, and maintain production data pipelines and system integrations. You'll own these pipelines end to end, partnering with the Business Intelligence team to deliver reliable data, actionable insights, and process automation across the business. This role requires comfort reading, understanding, and improving existing systems, not just building greenfield solutions. We use a combination of SQL Server, Snowflake, Fivetran, and Python, with a heavy emphasis on the Pandas library. Experience with these exact technologies is nice but not required: if you have a solid SQL and Python foundation, you will be able to pick up our specific tools quickly.

Requirements

  • Strong SQL skills: able to encapsulate complex logic and messy data into simple, consistent models for analysts.
  • Practical experience with Python (Pandas nice to have).
  • Experience integrating external systems via inbound and outbound APIs.
  • Understanding of logging, error handling, and control flow required to operate production data pipelines.
  • Solid grasp of data architecture and modeling (normalized/denormalized, star/snowflake).
  • Experience using version control with a team, ideally Git.
  • Excellent analytical thinking, problem solving, and communication skills.
  • Proven ability to work independently, manage priorities, and deliver in a rapidly changing environment.
  • Bachelor’s degree in Computer Science or equivalent experience.
  • Experience maintaining data pipelines and integrations across SQL Server/Snowflake or similar environments.
  • Experience supporting data ingestion from enterprise systems (e.g., Workday or other ERP/HR platforms) and delivering data for downstream reporting.
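To give a concrete sense of the logging, error-handling, and validation work described above, here is a minimal sketch in Python and Pandas. All names (the `validate_extract` function, the `employee_id`/`hire_date` schema) are hypothetical illustrations, not part of FHI 360's actual codebase:

```python
# Minimal sketch of a validated ingestion step: check the extract's
# schema, drop rows missing a key, and log what happened.
import logging
import pandas as pd

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pipeline")

REQUIRED_COLUMNS = {"employee_id", "hire_date"}  # hypothetical schema

def validate_extract(df: pd.DataFrame) -> pd.DataFrame:
    """Fail loudly on missing columns; drop rows missing the key and log counts."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"extract is missing columns: {sorted(missing)}")
    before = len(df)
    df = df.dropna(subset=["employee_id"])
    log.info("validated extract: kept %d of %d rows", len(df), before)
    return df

# A toy source-system extract with one bad row.
raw = pd.DataFrame({
    "employee_id": [101, None, 103],
    "hire_date": ["2023-01-15", "2023-02-01", "2023-03-10"],
})
clean = validate_extract(raw)
print(len(clean))  # → 2
```

The point is not the specific checks but the habit: every pipeline step either succeeds with a clear log line or raises a descriptive error, so failures are diagnosable without re-running the job blind.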

Nice To Haves

  • Experience with SQL Server, Snowflake, Fivetran, and Python with a heavy emphasis on the Pandas library.
  • Relevant certifications.
  • Experience with BI or reporting platforms. We primarily use PowerBI, SSRS, and Workday Reports, but strong fundamentals matter more than specific tool experience.

Responsibilities

  • Build and maintain ETL pipelines that ingest and validate source-system data with minimal transformation.
  • Design and implement SQL transformation layers that translate that raw source-system data into analyst-ready models.
  • Build and maintain data integrations via inbound and outbound APIs.
  • Independently troubleshoot data failures across the entire pipeline.
  • Automate manual processes and improve data delivery and reliability.
  • Create clear documentation (ETL processes, object usage, data models) and test/validate code changes.
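As a rough illustration of the "SQL transformation layer" responsibility above, the sketch below loads a messy raw table into an in-memory SQLite database and defines a cleaned, analyst-ready view on top of it. The table and column names (`raw_orders`, `dim_orders`) are invented for the example; SQLite stands in for SQL Server or Snowflake:

```python
# Hypothetical transformation layer: raw source rows in, a deduplicated,
# trimmed, correctly typed model out, so analysts never query raw data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id, cust, amt_text);
    INSERT INTO raw_orders VALUES
        (1, ' Acme ', '100.50'),
        (1, ' Acme ', '100.50'),
        (2, 'Globex', '75.00');
    -- the second row is a duplicate from the source system

    CREATE VIEW dim_orders AS
    SELECT DISTINCT
        order_id,
        TRIM(cust)             AS customer_name,
        CAST(amt_text AS REAL) AS amount_usd
    FROM raw_orders;
""")
rows = conn.execute("SELECT * FROM dim_orders ORDER BY order_id").fetchall()
print(rows)  # → [(1, 'Acme', 100.5), (2, 'Globex', 75.0)]
```

The pattern mirrors the Requirements section: complex or messy source logic is encapsulated once, in one named model, rather than repeated in every downstream report.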