About The Position

We are seeking a Data Engineer to join our Customer Service Engineering Services team at Ring & Blink. This role owns the data platform foundations that enable analytics, AI initiatives, and future data-driven capabilities across the organization. You will operate and evolve ETL pipelines and AWS data infrastructure, ensuring data is secure, reliable, compliant, and ready for downstream consumption. This role is hands-on and execution-focused, with a strong emphasis on orchestration reliability and pipeline quality. You will work closely with BI Engineers, Technical Architects, Salesforce System Developers, and platform partners to translate architecture into production-ready systems.

The role requires a high degree of ownership and self-sufficiency. The ideal candidate is comfortable independently exploring existing systems, quickly diagnosing issues, and driving improvements from problem definition through production. You will be expected to move fast in a dynamic environment, proactively identify opportunities to improve platform reliability and maintainability, and remove technical roadblocks to keep delivery moving.

Role Context

When this role starts, the team will be completing a migration to a new, dedicated Redshift cluster, establishing full ownership of our customer support data platform. Core data models and pipeline designs will largely be defined, and the team will be in the execution and stabilization phase of the migration. The primary focus of this role is to stabilize, harden, and improve the data pipelines and orchestration layer, leading the effort to address accumulated technical debt and evolve the platform toward industry-standard, maintainable designs that support analytics, AI initiatives, and future data use cases.

Requirements

  • 5+ years of data engineering experience
  • Experience with data modeling, data warehousing, and building ETL pipelines
  • Experience with SQL
  • Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
  • Experience with Airflow or similar orchestration frameworks
  • Experience owning IAM permissions and access control in multi-account AWS environments

Nice To Haves

  • Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
  • 3+ years of working with Data & AI related technologies, including, but not limited to, AI/ML, GenAI, Analytics, Database, and/or Storage experience
  • Experience handling ambiguous or undefined challenges through strong problem-solving abilities
  • Experience managing data pipelines

Responsibilities

  • Own and operate ETL pipelines and data infrastructure that provide data readiness for analytics, AI initiatives, and other downstream use cases.
  • Implement, maintain, and own data access and data-related permissions across AWS accounts, including IAM roles, trust relationships, and permissions for Athena, Redshift, and related services; debug and resolve IAM, Lake Formation, and cross-account data access issues, and coordinate with partner teams when broader AWS issues impact data pipelines.
  • Serve as the primary operational owner of the existing Airflow orchestration layer, focusing on execution monitoring, failure triage, alerting, SLA adherence, and pipeline reliability, while enabling BI Engineers to troubleshoot and contribute as needed.
  • Develop, operate, and support Python-based ETL jobs, including Airflow-executed Python tasks and jobs running on EC2 or other managed compute, ensuring reliability, observability, and maintainability through established engineering best practices.
  • Lead the identification and reduction of technical debt across ETL pipelines and orchestration, including evaluating job dependencies, brittle workflows, and re-run/backfill patterns, and driving incremental improvements in partnership with the team as the platform stabilizes.
  • Own data pipeline design considerations related to legal, security, and compliance requirements, including evaluating existing data retention and access-control implementations and redesigning or refactoring pipeline logic as needed to address gaps and ensure ongoing compliance with approved governance standards.
  • Partner with BI Engineers to ensure data outputs are consistent, validated, and ready for consumption, without owning business metrics or dashboards.
  • Create and maintain runbooks, operational documentation, and standards to support long-term scalability, knowledge transfer, and on-call readiness.
  • Proactively identify risks and improvement opportunities, and independently drive scoped initiatives from investigation through production, escalating architectural concerns when appropriate.

Benefits

  • Medical
  • Financial
  • Other benefits