Stride Build · Posted about 24 hours ago
Full-time • Mid Level
51-100 employees

We are seeking a Lead Data Engineer to design, build, and scale the data platforms that power analytics, reporting, and advanced data initiatives. This role bridges raw data and actionable insights by ensuring data pipelines are reliable, scalable, secure, and optimized for performance. This is a hands-on, client-facing role in a consulting environment where you will both build and lead, working closely with data scientists, analysts, product teams, and client stakeholders.

Responsibilities:
  • Lead the design, development, and maintenance of scalable data pipelines and ETL/ELT processes.
  • Architect and evolve cloud-based data platforms that support analytics, business intelligence, and machine learning use cases.
  • Ensure data quality, integrity, availability, and consistency across systems.
  • Partner with stakeholders to translate business and analytical needs into robust data solutions.
  • Drive best practices for data modeling, performance optimization, and cost efficiency.
  • Design and manage data solutions using MySQL, MongoDB, and Snowflake.
  • Build batch and event-driven data pipelines leveraging AWS services such as S3, SQS, SES, SSM, MSK, STS, and Rekognition.
  • Optimize data workflows for scalability, reliability, and performance in cloud environments.
  • Partner with platform and DevOps teams to ensure secure, observable, and production-ready data systems.
  • Design and integrate cloud-based services and messaging systems to support distributed architectures.
  • Build and support data integrations and pipelines across relational, NoSQL, and analytics platforms.
  • Partner with DevOps and platform teams to support deployments, monitoring, and reliability, keeping applications secure, observable, and production-ready.
  • Serve as a technical leader across data initiatives, guiding architecture and implementation decisions.
  • Mentor Data Engineers through code reviews, design feedback, and knowledge sharing.
  • Collaborate with data scientists and analysts to ensure data is structured, discoverable, and usable.
  • Lead technical discovery and feasibility assessments for new data use cases and client engagements.
  • Contribute to internal data engineering standards, patterns, and documentation.
  • Be a strong team player who is equal parts mentor, coach, enabler, doer, and advisor for our teams and clients.
  • Establish and enforce best practices for data governance, security, privacy, and compliance.
  • Implement testing, validation, and monitoring processes to ensure data accuracy and reliability.
  • Monitor and troubleshoot data pipeline issues to minimize downtime and disruption.
  • Document data models, workflows, and architecture to ensure transparency and knowledge sharing.
  • Identify and implement automation opportunities to streamline data operations.
  • Work directly with client stakeholders to understand data needs, constraints, and priorities.
  • Communicate data architecture decisions, tradeoffs, and risks clearly and effectively.
  • Support analytics demos, reporting reviews, and data-driven decision-making discussions.
  • Act as a trusted data advisor in client-facing environments.

Requirements:
  • 6+ years of experience as a Data Engineer or in a similar role.
  • Strong expertise in SQL, data modeling, and data warehousing concepts.
  • Experience working with Apache Iceberg.
  • Hands-on experience with MySQL, MongoDB, and Snowflake.
  • Proven experience building and operating data pipelines in AWS-based environments.
  • Experience with ETL/ELT frameworks and orchestration tools such as Airflow or dbt.
  • Strong proficiency in Python for building and maintaining scalable data pipelines, data transformations, and automation across modern cloud and analytics platforms.
  • Strong understanding of data security, privacy, and governance practices.
  • Comfortable jumping into unclear data environments and concepts (i.e., willing to roll up your sleeves and get messy).
  • Experience working in Agile delivery environments.
  • Excellent communication skills and comfort working in client-facing roles.

Nice to Have:
  • Experience with real-time or event-driven data processing using Kafka or MSK.
  • Familiarity with machine learning pipelines and analytics enablement.
  • Experience with containerization or orchestration tools such as Docker or Kubernetes.
  • Exposure to data cataloging, lineage, or observability tools.
  • Cloud or data engineering certifications.