About The Position

At ALSAC, you do more than make a living; you make a difference. As one of the world's most iconic and respected nonprofits, we know what it's like to stand out. We are looking for individuals whose background, perspective, and desire to make an impact set them apart. As we work to help St. Jude cure childhood cancer, we're calling on the game-changers, innovators, and visionaries to join our family.

The Data Engineer III (Information Architecture) designs, builds, and maintains the data infrastructure that powers analytics, reporting, and decision‑making across the organization. This role owns end‑to‑end data pipelines, from ingestion through modeling and delivery, and partners closely with analytics, product, and business stakeholders. You'll have the opportunity to shape data platform standards, mentor other engineers, and deliver high‑impact solutions that directly support a mission‑driven organization. If you enjoy solving complex data problems at scale and want your work to make a meaningful difference, this role offers both technical challenge and purpose.

Requirements

  • Strong proficiency in SQL for development and analysis.
  • Hands‑on experience building and optimizing large‑scale data pipelines and architectures.
  • Solid understanding of dimensional data modeling and data warehousing concepts.
  • Experience sourcing and integrating complex data from multiple systems into a data lake or enterprise data platform.
  • Strong analytical and problem‑solving skills, including root cause analysis of data and processes.
  • Ability to design processes supporting data transformation, orchestration, dependency management, and metadata.
  • Experience with AWS data services including S3, Lambda, Glue, Athena, Redshift, RDS, EMR, EC2, and VPC configurations.
  • API and integration experience using tools such as AWS Lambda, Mulesoft, SSIS, or Confluent.
  • Experience with workflow and orchestration tools such as Airflow, Luigi, or Azkaban.
  • Familiarity with stream processing and big data technologies including Kafka, Spark, Spark Streaming, and Kafka Streams.
  • Experience with relational and NoSQL databases such as SQL Server, MongoDB, and DynamoDB.
  • Experience with CI/CD and DevOps practices, including database source control and deployments (Liquibase, DbUp, SSDT).
  • Programming experience in Python, Java, Scala, or C++.
  • Security‑minded approach with experience building compliance‑aware applications.
  • Familiarity with data observability and metadata platforms supporting discovery, lineage, and stewardship.
  • Experience partnering with security, platform, and governance teams to support standards and adoption.
  • Ability to act as a technical lead and mentor for junior engineers.
  • Ability to contribute to defining best practices, SOPs, and target‑state architecture.
  • Ability to influence data engineering standards across the organization.
  • Bachelor’s degree in Engineering, Information Technology, or a related field plus 5–8 years of relevant experience.
  • Master’s degree preferred; experience partnering directly with executive and cross‑functional stakeholders is a plus.

Responsibilities

  • Design, build, and maintain scalable, optimized data pipelines that support business intelligence and analytics use cases.
  • Partner with Enterprise, Solution, Technical Architecture, and Infrastructure teams to evolve data platform architecture.
  • Perform conceptual and logical data modeling to assemble data sets of varying size and complexity.
  • Develop and manage ETL/ELT processes using AWS and SQL-based technologies.
  • Implement analytical solutions that generate actionable insights into operational efficiency, donor acquisition, and key business metrics.
  • Collaborate with data, product, design, and executive stakeholders to support data needs and resolve technical issues.
  • Serve as a technical lead for Data Engineer I and II teammates by coaching, mentoring, and reinforcing best practices.
  • Conduct testing and root cause analysis to troubleshoot data quality, performance, and integration issues.
  • Provide structured and ad‑hoc knowledge sharing and documentation across teams.
  • Support data access configuration, integrations, and onboarding standards.
  • Partner with governance and security teams to enable data cataloging, lineage, and discovery (e.g., Alation or similar tools).
  • Assist with administration and configuration of data platform tools, including access management and integrations.

Benefits

  • Low-cost, low-deductible Medical, Dental, and Vision Insurance plans
  • 401(k) Retirement Plan with 7% Employer Contribution
  • Exceptional Paid Time Off
  • Maternity / Paternity Leave
  • Infertility Treatment Program
  • Adoption Assistance
  • Education Assistance
  • Enterprise Learning and Development