Anywhere · Posted 3 months ago
1,001-5,000 employees

We're seeking a talented, creative, and motivated Lead Engineer who enjoys building data platform tools and is eager to collaborate with a team that shares that passion. With a related degree and relevant experience under your belt, you're ready to take your programming and data knowledge to the next level. You enjoy building tools such as a Data Ingestion Service that ingests data from diverse sources into the Data Platform, as well as tools for monitoring and observability. With your unwavering commitment to quality, strong data and cloud skills, and collaborative work ethic, you'll do great things here at Anywhere.

Responsibilities:
  • Work with other Data Engineers on the build-out of the Next Generation Data Platform.
  • Design and develop a Data Ingestion Service for real-time streaming of data from SQL Server, MySQL, and Oracle using CDC-based technologies.
  • Design and develop a Data Ingestion Service for real-time streaming of data from third-party APIs, internal micro-services, and files stored in S3/SFTP servers.
  • Work with the team to design and develop a Data Platform Storage Optimization & Delta Detection Service using Apache Iceberg.
  • Work with the team to design and develop a Data Catalog Service using Snowflake Horizon and Polaris.
  • Work with the team to design and develop Data Observability using DataDog and Data Recon to detect data anomalies.
  • Design and develop a CI/CD process for continuous delivery in AWS Cloud and Snowflake.
  • Design, develop, and test robust, scalable data platform components.
Qualifications:
  • Bachelor's degree in Computer Science, Engineering, or a related technical discipline, or an equivalent combination of training and experience.
  • 10+ years of programming experience building application frameworks and back-end systems for high-volume pipelines using Java/Python.
  • 10+ years of experience building data frameworks and platforms, scaling them to handle large volumes of data.
  • 5+ years of experience building streaming platforms using Apache Kafka, Confluent Kafka, and Amazon MSK (Managed Streaming for Apache Kafka).
  • 5+ years of experience ingesting data from SQL Server/MySQL/Oracle using Change Data Capture, Debezium, and Kafka Connect.
  • 5 years of experience using AWS data services: DMS, EMR, Glue, Athena, S3, and Lambda.
  • 2 years of experience building platform monitoring using DataDog.
  • 2 years of experience building Data Observability using Monte Carlo.
  • 2 years of experience building data solutions using Apache Iceberg and Apache Hudi.
  • 1 year of experience with data architecture, ETL, and processing of structured and unstructured data.
  • 5 years of experience with DevOps tools (any combination of GitLab, Bitbucket) and methodologies (Lean, Agile, Scrum, Test-Driven Development).