About The Position

As a Specialist Solutions Architect (SSA)—AI Tooling & System Management, you will build and manage the AI tooling stack and system infrastructure that empowers Field Engineering to deliver customer outcomes with higher velocity. These capabilities will be used by our Go-To-Market teams, including Solutions Architects and Account Executives, to accelerate technical demos, proofs of concept, and customer engagements. You will bring consistency to our internal AI tooling stack, establish standards for AI-driven development practices, and scale these capabilities across the department. A critical aspect of this role is building the infrastructure that enables agent networks to perform with high quality and reliability—including context management systems, data integrations, and supporting tooling. Additionally, you will develop internal applications and technical tools that enhance the overall lifecycle, track adoption metrics to measure impact, and partner with stakeholders to drive continuous improvement through intelligent automation and AI-augmented workflows.

Requirements

  • 5+ years of experience in a technical role with expertise in the following:
      • Cloud Platforms & Architecture: Cloud-native architecture in AWS, Azure, or GCP; serverless architecture
      • AI Tooling: AI-assisted development platforms (Databricks AI Assistant, Claude Code, Cursor), prompt engineering, AI workflow automation
      • Context Management & Agent Networks: Vector databases, RAG (Retrieval Augmented Generation), embedding models, knowledge base systems, multi-agent orchestration
      • Application Development: Building internal tools and web applications with TypeScript/Node.js, Python, or similar modern stacks
      • Metrics & Analytics: Instrumentation and telemetry systems, dashboards, adoption tracking, measuring tool effectiveness, data visualization
      • System Integration & Data Pipelines: Designing integrations between enterprise systems, ETL processes, API design, data synchronization, event-driven architectures
      • Security & Platform Administration: Platform security, network security, data security, Gen AI & model security, encryption, vulnerability management, compliance, secure API integration, identity management, infrastructure management
      • Infrastructure Automation & DevOps: IaC tools (Terraform), CI/CD pipelines, GitOps workflows
  • Deep specialty expertise in at least one of the following:
      • Security: Securing AI development environments, managing identities, securing data pipelines for AI, and implementing security best practices for AI workflows
      • System Integrations & Application Deployment: Designing and implementing integrations between disparate systems to enable AI data flows, and deploying applications on cloud infrastructure using best practices in security, networking, and scalability
      • Developer Experience & AI Tooling: Modern developer workflows, AI-assisted development, and optimizing developer productivity through AI tooling
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent work experience
  • Hands-on experience with Python
  • 2+ years of experience with data technologies (Spark, Kafka, data pipelines) and modern application architectures
  • Ability to meet expectations for technical training and role-specific outcomes within 6 months of hire

Nice To Haves

  • Proficiency in TypeScript/JavaScript is highly desirable
  • Experience with modern web application frameworks (React, Next.js) and deployment patterns is a plus
  • Hands-on experience with AI platforms and AI workflow automation
  • Experience building solutions on Databricks is a big plus

Responsibilities

  • Architect production-level AI tooling deployments that meet security, networking, and data integration requirements
  • Build and maintain internal AI tooling infrastructure for demos, learning, building POCs, and production workflows across platforms, including AI-assisted development environments, Databricks environments, and cloud-based tooling
  • Establish consistency in the AI tooling stack by defining standards, best practices, and reusable patterns that enable Field Engineering to build with AI efficiently and reliably at scale
  • Build context management infrastructure for agent networks, including vector databases, knowledge bases, and retrieval systems that ensure AI agents have access to the right information at the right time
  • Design and implement system integrations to bring data from enterprise sources into AI applications, ensuring secure, scalable, and reliable data flows
  • Develop internal applications to streamline Field Engineering workflows, improve demo and builder environments, and accelerate customer engagement velocity
  • Track adoption metrics and tooling effectiveness by instrumenting the AI tooling stack, building dashboards, and providing data-driven insights to leadership on adoption rates, productivity gains, and ROI
  • Manage AI tooling infrastructure and spend by overseeing cloud costs, monitoring consumption as teams scale, resolving capacity issues, and deploying automation to reduce operational overhead
  • Partner with Scale and Technical Enablement teams to develop documentation, AI-powered development patterns, and training materials
  • Support Solutions Architects with custom proof-of-concept environments, AI tooling configurations, and technical guidance for customer engagements

Benefits

  • At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.