About The Position

The Senior Platform / DevOps Engineer plays a critical role in designing, automating, and operating enterprise‑grade Databricks platforms in support of data and AI initiatives. This role partners closely with engineering, data, and cloud teams to ensure scalable, secure, and cost‑optimized platform services. The engineer operates with a high degree of autonomy, contributing deep technical expertise while supporting mission‑critical workloads.

CDW is committed to being an AI-fluent organization. We're looking for people who bring curiosity, a learner's mindset, and a willingness to engage with ever-evolving technology and tools. We value adopting AI as a partner, openness to experimentation, and a shared interest in learning about AI together. Our goal is to create a culture where AI enhances, rather than replaces, human creativity and decision-making. You don't need to be an expert today; what matters is your readiness to explore, adapt, and grow with us as we integrate AI responsibly and effectively into our work.

Additionally, CDW is committed to fostering an equitable, transparent, and respectful hiring process for all applicants. During our application process, our goal is to understand your experience, strengths, skills, and qualifications. As an AI-forward company, we see AI not just as a tool but as a catalyst for new ways of thinking, creating, and communicating. We encourage candidates to embrace an AI mindset, one that is curious, adaptive, and ready to explore what's possible. We welcome thoughtful use of AI to expand your perspective and elevate how you share your story, while ensuring your application remains rooted in your own background, judgment, and voice.

Requirements

  • 5 years of experience designing, developing, and deploying solutions on the Databricks platform, with a strong understanding of its architecture and capabilities.
  • Proficiency in Python, including PySpark, and SQL; experience with Scala or Java is a plus.
  • Strong understanding of cloud platforms such as AWS, Azure, or GCP, including compute, storage, identity and access management, and networking services.
  • Hands‑on experience with data warehousing, data lake, and Lakehouse architectures, including Delta Lake and Medallion Architecture concepts (a brief PySpark sketch follows this list).
  • Proven experience building and maintaining ETL and ELT pipelines in enterprise environments.
  • Experience using Git‑based version control and CI/CD practices for data and platform deployments.
  • Strong problem‑solving, critical thinking, and communication skills, with the ability to operate independently in complex technical environments.
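
To give a concrete flavor of the pipeline work described above, the following is a minimal PySpark sketch of a medallion-style promotion from a raw bronze Delta table to a cleaned silver table. The catalog, table names, and columns (main.raw.events_bronze, event_id, event_ts) are illustrative assumptions, not an actual CDW schema.

    # Minimal medallion-style promotion: bronze (raw) -> silver (cleaned).
    # Catalog, table names, and columns are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

    # Read the raw bronze Delta table (hypothetical Unity Catalog name).
    bronze = spark.read.table("main.raw.events_bronze")

    # Basic cleaning: drop malformed rows, normalize the timestamp, deduplicate.
    silver = (
        bronze
        .filter(F.col("event_id").isNotNull())
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .dropDuplicates(["event_id"])
    )

    # Append the cleaned rows to the silver layer as a Delta table.
    silver.write.format("delta").mode("append").saveAsTable("main.curated.events_silver")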

Responsibilities

  • Automate and manage Databricks workspaces, clusters, Unity Catalog, identity, networking, and secret scopes using Infrastructure‑as‑Code tools such as Terraform.
  • Design, implement, and maintain CI/CD pipelines using GitHub Actions and Azure DevOps to support data and platform deployments.
  • Establish and enforce platform guardrails including cluster policies, cost controls, logging, alerting, drift detection, and secure networking standards (see the policy sketch after this list).
  • Ensure platform health, scalability, reliability, and cost optimization across Databricks environments.
  • Provide tier‑3 operational support, troubleshoot complex incidents, and drive root‑cause resolution.
  • Collaborate with data engineering, analytics, and AI teams to enable efficient development and deployment of data solutions.
  • Contribute to platform standards, documentation, and continuous improvement initiatives.
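
As one concrete example of the guardrail automation above, here is a minimal sketch that creates a cluster policy capping auto-termination time and restricting node types. It uses the Databricks Python SDK (databricks-sdk) for brevity; in practice this role would typically manage such policies declaratively with Terraform. The policy name, rule values, and node types are assumptions for illustration, not CDW standards.

    # Minimal sketch: enforce cluster guardrails via a cluster policy.
    # Shown with the Databricks Python SDK for brevity; the same policy
    # would typically be managed declaratively with Terraform.
    import json
    from databricks.sdk import WorkspaceClient

    # Assumes workspace authentication is already configured
    # (e.g., DATABRICKS_HOST and DATABRICKS_TOKEN environment variables).
    w = WorkspaceClient()

    # Hypothetical guardrails: cap auto-termination and restrict node types.
    policy_rules = {
        "autotermination_minutes": {"type": "range", "maxValue": 60},
        "node_type_id": {
            "type": "allowlist",
            "values": ["Standard_DS3_v2", "Standard_DS4_v2"],
        },
    }

    created = w.cluster_policies.create(
        name="team-guardrails",  # illustrative policy name
        definition=json.dumps(policy_rules),
    )
    print(f"Created cluster policy {created.policy_id}")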