About The Position

This role designs and builds scalable data platform components for batch and real-time pipelines, owning features end to end from design through production support. You will develop and maintain high-availability, production-grade data services, contribute to system architecture and technical design decisions, improve reliability, performance, and CI/CD practices, and troubleshoot and resolve complex data pipeline issues. We are looking for proven experience building scalable, distributed data platforms, a strong understanding of system design and backend architecture, the ability to work independently after onboarding and deliver production features, and a STEM bachelor's degree plus 5 years of relevant experience.

Requirements

  • 5+ years of backend or data engineering experience
  • Strong hands-on Scala with Apache Spark (not PySpark)
  • Experience with big data systems: Spark, Kafka, Airflow
  • AWS data services and distributed systems
  • Data modeling and performance optimization
  • Proven experience building scalable, distributed data platforms
  • Strong understanding of system design and backend architecture
  • Ability to work independently after onboarding and deliver production features
  • STEM bachelor’s degree plus 5 years of relevant experience

Nice To Haves

  • Databricks development and deployment experience
  • Snowflake, Kinesis, or Lambda exposure
  • Infrastructure tooling such as Terraform, Kubernetes, or IAM
  • Microservices frameworks such as Spring Boot or FastAPI

Responsibilities

  • Design and build scalable data platform components for batch and real-time pipelines
  • Own features end to end from design through production support
  • Develop and maintain high-availability, production-grade data services
  • Contribute to system architecture and technical design decisions
  • Improve reliability, performance, and CI/CD practices
  • Troubleshoot and resolve complex data pipeline issues