About The Position

At ClickUp, we’re not just building software; we’re architecting the future of work. In a world overwhelmed by work sprawl, we saw a better way. That’s why we created the first truly converged AI workspace: one that unifies tasks, docs, chat, calendar, and enterprise search, all supercharged by context-driven AI, empowering millions of teams to break free from silos, reclaim their time, and unlock new levels of productivity. At ClickUp, you’ll have the opportunity to learn, use, and pioneer AI in ways that shape not only our product, but the future of work itself. Join us and be part of a bold, innovative team that’s redefining what’s possible! 🚀

We are seeking a highly skilled and motivated ML Engineer to join our team. This role sits at the intersection of machine learning, data science, and MLOps: you will own the full lifecycle of ML systems, from feature engineering through production deployment and monitoring. You will collaborate closely with data scientists, analysts, and data engineering teams to build robust, scalable ML systems that drive impactful business decisions.

The Role

  • Model Development & Deployment: Deploy production-grade machine learning models, ensuring reliability, low latency, and scalability.
  • MLOps & Infrastructure: Build and maintain end-to-end ML pipelines, including automated training, evaluation, versioning, deployment, and monitoring workflows.
  • Feature Engineering: Partner with data scientists to design, implement, and optimize feature pipelines that feed into ML models, ensuring data quality and freshness.
  • Model Performance & Monitoring: Establish monitoring frameworks to track model performance, detect drift, and trigger retraining as needed.
  • Data Science Enablement: Work alongside data scientists to translate research prototypes into production-ready systems, and create tooling that accelerates experimentation.
  • Collaboration: Act as a bridge between data science and software engineering teams, ensuring seamless integration of ML models into broader product and platform architectures.
  • Performance Optimization: Continuously improve model inference speed, pipeline efficiency, and overall system scalability.
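To give candidates a concrete flavor of the drift-monitoring work described above, here is a minimal sketch of distribution-drift detection using a two-sample Kolmogorov-Smirnov test from SciPy. The function name `has_drifted` and the significance threshold are illustrative choices for this sketch, not a description of ClickUp's actual stack.

```python
# Minimal sketch: flag feature drift by comparing a live sample of a
# numeric feature against its training-time baseline with a two-sample
# Kolmogorov-Smirnov test (scipy.stats.ks_2samp).
import numpy as np
from scipy import stats


def has_drifted(baseline, live, alpha=0.01):
    """Return True if the live sample's distribution differs
    significantly from the baseline (p-value below alpha)."""
    statistic, p_value = stats.ks_2samp(baseline, live)
    return bool(p_value < alpha)


rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time feature values
stable = rng.normal(loc=0.0, scale=1.0, size=5000)    # live data, same distribution
shifted = rng.normal(loc=0.5, scale=1.0, size=5000)   # live data, mean has shifted

print(has_drifted(baseline, stable))   # False: same underlying distribution
print(has_drifted(baseline, shifted))  # True: shift is detected, could trigger retraining
```

In a production pipeline, a check like this would run on a schedule per feature, with the drift signal feeding the retraining triggers mentioned in the role description.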

Requirements

  • Experience: 4+ years of experience in ML engineering, data engineering, or a related role, with at least 2 years focused on building and deploying machine learning systems in production.
  • Technical Skills:
      • Strong proficiency in Python and experience with ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
      • Hands-on experience with MLOps tools and platforms (e.g., MLflow, SageMaker, Kubeflow, Vertex AI).
      • Solid SQL skills and experience with data warehouses and feature stores.
      • Experience with big data technologies (e.g., Spark, Hadoop) and streaming frameworks.
      • Expertise in cloud platforms (e.g., AWS, GCP, Azure) and containerization tools (e.g., Docker, Kubernetes).
      • Familiarity with CI/CD practices applied to ML workflows.
  • ML Knowledge: Strong understanding of machine learning algorithms, model evaluation techniques, feature engineering, and experiment tracking.
  • Soft Skills: Strong problem-solving abilities, excellent communication skills, and a collaborative mindset with the ability to work across technical and non-technical stakeholders.

Nice To Haves

  • Education: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, Engineering, or a related field.
