About The Position

At ClickUp, we’re not just building software. We’re architecting the future of work! In a world overwhelmed by work sprawl, we saw a better way. That’s why we created the first truly converged AI workspace, unifying tasks, docs, chat, calendar, and enterprise search, all supercharged by context-driven AI, empowering millions of teams to break free from silos, reclaim their time, and unlock new levels of productivity. At ClickUp, you’ll have the opportunity to learn, use, and pioneer AI in ways that shape not only our product, but the future of work itself. Join us and be part of a bold, innovative team that’s redefining what’s possible! 🚀

We’re looking for a Senior Data Engineer to help shape and scale our modern data platform. This role combines hands-on technical ownership with strategic influence across our data ecosystem. You’ll work with AWS serverless technologies, Snowflake, dbt, and Terraform to design reliable, scalable, and cost-effective data pipelines that power analytics and AI-driven applications. You’ll play a key role in building the foundation that lets us make smarter, faster, data-driven decisions while also enabling AI-powered products and insights.

Requirements

  • 5+ years of professional experience in data engineering or software engineering, with emphasis on scalable data systems.
  • Deep experience with AWS cloud services (especially Lambda, Fargate, Step Functions, S3, DynamoDB, Aurora).
  • Hands-on expertise in Terraform and/or AWS CDK for managing infrastructure as code.
  • Strong knowledge of SQL and experience with Snowflake or another cloud data warehouse.
  • Proficiency with dbt and modern ELT patterns.
  • Solid coding skills in Python (or another general-purpose language).
  • Experience building pipelines to support AI/ML workflows (feature engineering, training data pipelines, model monitoring).
  • Understanding of CI/CD workflows, Git-based development, and containerization (Docker).
  • Knowledge of data quality, governance, and observability best practices.

Nice To Haves

  • Experience in both startup and enterprise environments, with the ability to adapt to fast-changing priorities while maintaining quality.
  • Familiarity with orchestration frameworks (Airflow, Dagster, Prefect).
  • Knowledge of streaming/event-driven architectures (Kinesis, Kafka).
  • Prior experience mentoring or leading engineers.

Responsibilities

  • Design, build, and maintain cloud-native data infrastructure using Terraform for IaC.
  • Develop and optimize data pipelines leveraging AWS services (Lambda, Fargate, Step Functions, S3, Kinesis, DynamoDB, Aurora, etc.) and Snowflake.
  • Implement ELT workflows in dbt in partnership with our Analytics Engineering function.
  • Build and maintain LLM frameworks, ensuring high-quality, cost-effective outputs.
  • Automate infrastructure and pipeline deployments with CI/CD best practices.
  • Monitor, debug, and improve system performance with strong observability and logging practices.
  • Partner with cross-functional teams (analytics, data science, product, engineering) to deliver high-quality data solutions.
  • Mentor teammates and contribute to engineering standards, raising the technical bar across the team.