Jobgether · Posted 2 months ago
Mid Level
11-50 employees

As a Backend Engineer on the Data Platform team, you will be instrumental in designing, building, and maintaining large-scale data pipelines and analytics infrastructure that power critical internal and external products. You will work with batch and stream processing systems to manage trillions of events per day, ensuring data reliability, scalability, and performance. This role provides a unique opportunity to collaborate with cross-functional teams, implement innovative solutions, and optimize complex data workflows. You will also actively contribute to monitoring, troubleshooting, and improving production systems while shaping the standards and best practices for backend data engineering. The position emphasizes hands-on engineering, creative problem-solving, and ownership of critical data products in a dynamic environment.

  • Build and expand data pipelines and products supporting experimentation, release observability, metrics, product analytics, and internal business intelligence.
  • Collaborate with frontend engineers, product managers, and UX designers to deliver user-facing data features.
  • Monitor, optimize, and maintain database and pipeline performance.
  • Write unit, integration, and load tests to ensure high-quality data delivery.
  • Participate in code reviews and provide feedback on technical proposals.
  • Contribute to improving engineering standards, tooling, and development processes.
  • Support production systems, including on-call rotations, to ensure reliability and performance.

  • 5+ years of backend software engineering experience.
  • At least 1 year of experience building data pipelines or data warehouse solutions.
  • Demonstrated expertise with pipeline technologies such as Kinesis, Airflow, Spark, Lambda, Flink, and Athena.
  • Experience working with event or analytical data in databases such as ClickHouse, Postgres, Elasticsearch, Timestream, or Snowflake.
  • Strong foundation in computer science fundamentals, including data structures, distributed systems, concurrency, and threading.
  • Familiarity with Infrastructure-as-Code tools (e.g., Terraform) and observability tools (e.g., Datadog).
  • Commitment to writing maintainable, high-quality code and following engineering best practices.
  • Excellent communication skills and ability to collaborate in a team-oriented environment.

  • Competitive salary based on geographic location with transparent pay ranges.
  • Restricted Stock Units (RSUs) in addition to base salary.
  • Comprehensive health, dental, and vision insurance.
  • Mental health benefits and wellness programs.
  • Remote-friendly work environment with support for flexible arrangements.
  • Opportunities for professional growth and working with cutting-edge data technologies.