Responsibilities
- Administer Databricks account and workspaces across SDLC environments; standardize configuration, naming, and operational patterns.
- Configure and maintain clusters/compute, job compute, SQL warehouses, runtime versions, libraries, repos, and workspace settings.
- Implement platform monitoring/alerting, operational dashboards, and health checks; maintain runbooks and operational procedures.
- Provide Tier 2/3 operational support: troubleshoot incidents, perform root-cause analysis, and drive remediation and preventive actions.
- Manage change control for upgrades, feature rollouts, configuration changes, and integration changes; document impacts and rollback plans.
- Enforce least privilege across platform resources (workspaces, jobs, clusters, SQL warehouses, repos, secrets) using role/group-based access patterns.
- Configure and manage secrets and secure credential handling (secret scopes / key management integrations) for platform and data connectivity.
- Enable and maintain audit logging and access/event visibility; support security reviews and evidence requests.
- Administer Unity Catalog governance: metastores, catalogs/schemas/tables, ownership, grants, and environment/domain patterns.
- Configure and manage external locations, storage credentials, and governed access to cloud object storage.
- Partner with governance stakeholders to support metadata/lineage integration, classification/tagging, and retention controls where applicable.
- Coordinate secure connectivity and guardrails with cloud/network teams: private connectivity patterns, egress controls, and firewall/proxy needs.

Requirements
- Must be able to obtain and maintain Moderate Risk Public Trust (MRPT) facility credentials/authorization. Note: US citizenship is required for MRPT facility credentials/authorization at this work location.
- 7+ years in cloud/data platform administration and operations, including 4+ years supporting Databricks or similar platforms.
- Bachelor's degree in Engineering, Computer Science, Information Systems, or an IT-related discipline, or equivalent practical experience.
- Experience with the Scrum framework, Agile engineering, Lean methodologies, or DevOps.
- Experience with one or more of the following: system development, software development, hardware development, or mission support.
- Experience working with DevOps/CI/CD-related technologies (Azure DevOps, Git, Jenkins, Puppet, Docker, Confluence, SonarLint, and JUnit).
- Ability to work at the conceptual level and with program leads, customers, and internal teams to ensure successful system development, integration, and deployment.
- Hands-on experience administering Databricks (workspace administration, clusters/compute policies, jobs, SQL warehouses, repos, runtime management) and expertise using the Databricks CLI.
- Strong Unity Catalog administration: metastores, catalogs/schemas, grants, service principals, external locations, storage credentials, and governed storage access.
- Identity & Access Management proficiency: SSO concepts, SCIM provisioning, group-based RBAC, service principals, and least-privilege patterns.
- Security fundamentals: secrets management, secure connectivity, audit logging, access monitoring, and evidence-ready operations.
- Cloud platform expertise (AWS): IAM roles/policies, object storage security patterns, networking basics (VPC concepts), and logging/monitoring integration.
- Automation skills: scripting and/or IaC using Terraform/CLI/REST APIs for repeatable configuration and environment promotion.
Certifications (must have at least one of the following):
- Cloud-related (e.g., DevOps, Security, and/or ML)
- Databricks Platform Administrator / Databricks AWS Platform Architect
- Databricks Certified Data Engineer Associate or Professional
- AWS Certified Solutions Architect Associate or Professional

Rubix Solutions LLC is an EEO Employer - M/F/Disability/Protected Veteran Status
Job Type
Full-time
Career Level
Mid Level