McKinsey
Full-time • Mid Level
San Francisco, CA
5,001-10,000 employees
Professional, Scientific, and Technical Services

Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance, high-reward culture: doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues, at all levels, will invest deeply in your development, just as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won't find anywhere else. When you join us, you'll have continuous learning, a voice that matters, a global community, and world-class benefits.

What you'll do:
  • Design, build, and optimize modern data platforms that power advanced analytics and AI solutions.
  • Collaborate with clients and interdisciplinary teams to architect scalable pipelines and manage secure, compliant data environments.
  • Ensure data is accurate, accessible, and production-ready so clients can accelerate digital transformations and adopt AI responsibly; a minimal pipeline sketch follows this list.
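As a rough illustration of this kind of work, the PySpark snippet below ingests raw events, applies basic quality rules, and writes a curated, partitioned layer. It is a minimal sketch: the paths, column names, and schema are assumptions for illustration, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("client_ingest_example").getOrCreate()

    # Read raw events from a hypothetical landing zone.
    raw = spark.read.json("s3://client-landing/events/")

    # Basic quality rules so downstream consumers get production-ready data:
    # require a primary key, deduplicate, and stamp lineage metadata.
    clean = (
        raw.filter(F.col("event_id").isNotNull())
        .dropDuplicates(["event_id", "event_ts"])
        .withColumn("event_date", F.to_date("event_ts"))
        .withColumn("ingested_at", F.current_timestamp())
    )

    # Partitioned Parquet keeps the curated layer cheap to scan
    # for analytics and AI workloads.
    clean.write.mode("append").partitionBy("event_date").parquet("s3://client-curated/events/")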
Example client projects:
  • Develop a streaming data platform that integrates telemetry for predictive maintenance in aerospace systems (sketched after this list).
  • Implement secure data pipelines that reduce time-to-insight for a Fortune 500 utility company.
  • Optimize large-scale batch and streaming workflows for a global financial services client.
  • Develop pipelines for embeddings and vector databases to enable retrieval-augmented generation (RAG) for a global defense client.
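A hedged sketch of the first example project: Spark Structured Streaming reads telemetry from Kafka and computes windowed features for a downstream predictive-maintenance model. The topic name, broker address, schema, and Delta output path are all assumptions, and writing Delta assumes the delta-spark package is available on the cluster.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("telemetry_stream_example").getOrCreate()

    # Assumed telemetry payload; a real schema would come from the client's sensors.
    schema = StructType([
        StructField("sensor_id", StringType()),
        StructField("reading", DoubleType()),
        StructField("event_ts", TimestampType()),
    ])

    # Ingest raw telemetry from a hypothetical Kafka topic.
    telemetry = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "aircraft-telemetry")
        .load()
        .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Windowed aggregates become features for a predictive-maintenance model.
    features = (
        telemetry.withWatermark("event_ts", "10 minutes")
        .groupBy(F.window("event_ts", "5 minutes"), "sensor_id")
        .agg(F.avg("reading").alias("avg_reading"), F.max("reading").alias("max_reading"))
    )

    query = (
        features.writeStream.format("delta")
        .option("checkpointLocation", "/chk/telemetry")
        .outputMode("append")
        .start("/tables/telemetry_features")
    )
    query.awaitTermination()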
Your qualifications:
  • Degree in Computer Science, Business Analytics, Engineering, Mathematics, or a related field.
  • 2+ years of professional experience in data engineering, software engineering, or adjacent technical roles.
  • Proficiency in Python, Scala, or Java for production-grade pipelines, with strong SQL and PySpark skills.
  • Hands-on experience with cloud platforms such as AWS, GCP, Azure, or Oracle Cloud, and modern data storage and warehouse solutions such as Snowflake, BigQuery, Redshift, or Delta Lake.
  • Practical experience with Databricks, AWS Glue, and transformation frameworks such as dbt, Dataform, or Databricks Asset Bundles.
  • Knowledge of distributed processing frameworks such as Spark, Dask, or Flink, and streaming platforms such as Kafka, Kinesis, or Pulsar for real-time and batch processing.
  • Familiarity with workflow orchestration tools such as Airflow, Dagster, or Prefect; CI/CD for data workflows; and infrastructure-as-code (Terraform, CloudFormation). An orchestration sketch follows this list.
  • Understanding of DataOps principles, including pipeline monitoring, testing, and automation, with exposure to observability tools such as Datadog, Prometheus, or Great Expectations.
  • Exposure to ML platforms such as Databricks, SageMaker, or Vertex AI; MLOps best practices; and GenAI toolkits such as LangChain, LlamaIndex, or Hugging Face.
  • Willingness to travel as required.
  • Strong communication, time management, and resilience, with the ability to align technical solutions to business value.
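To illustrate the orchestration and DataOps items above, here is a minimal Airflow 2.x sketch: a daily DAG that runs dbt models and then a simple quality gate. The dbt project path and the marker-file check are hypothetical placeholders; a production gate might run a Great Expectations suite instead.

    from datetime import datetime
    from pathlib import Path

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator

    def quality_gate():
        # Hypothetical gate: fail the run if dbt did not publish its success marker.
        if not Path("/data/marts/_SUCCESS").exists():
            raise ValueError("expected output marker missing; failing the pipeline")

    # `schedule` is the Airflow 2.4+ argument; earlier 2.x releases use `schedule_interval`.
    with DAG(
        dag_id="daily_client_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        run_models = BashOperator(
            task_id="run_dbt_models",
            bash_command="dbt run --project-dir /opt/dbt/client_project",
        )
        check = PythonOperator(task_id="quality_gate", python_callable=quality_gate)
        run_models >> check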
What we offer:
  • Competitive salary based on location, experience, and skills.
  • Comprehensive benefits package to enable holistic well-being for you and your family.