Salesforce · Posted 5 days ago
$184,000 - $253,000/Yr
Full-time • Mid Level
San Francisco, CA
5,001-10,000 employees

Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn't a buzzword; it's a way of life. The world of work as we know it is changing, and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level up your career at the company leading workforce transformation in the agentic era? You're in the right place! Agentforce is the future of AI, and you are the future of Salesforce.

We are looking for exceptional Lead Engineers to build the engine that powers Salesforce's enterprise intelligence. In this role, you will be a hands-on technical contributor responsible for modernizing our core data ecosystem. You will move beyond simple ETL scripts to build a robust, software-defined Data Mesh using Snowflake, dbt, Airflow, and Informatica. You will bridge the gap between "Data Engineering" and "Software Engineering": treating data pipelines as production code, automating infrastructure with Terraform, and optimizing high-scale distributed systems to enable AI and analytics across the enterprise.

  • Core Platform Engineering & Architecture
  • Build & Ship: Design and implement scalable data pipelines and transformation logic using Snowflake (SQL) and dbt. Replace legacy hardcoded scripts with modular, testable, and reusable data components.
  • Orchestration: Engineer robust workflows in Airflow. Write custom Python operators and ensure DAGs are dynamic, factory-generated, and resilient to failure (see the sketch below this list).
  • Performance Tuning: Own the performance of your datasets. Deep dive into query profiles, optimize pruning/clustering in Snowflake, and reduce credit consumption while improving data freshness.
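To make the orchestration bullet above concrete, here is a minimal sketch of a factory-generated Airflow DAG, assuming Airflow 2.4+ with dbt invoked via BashOperator; the DOMAINS registry, dbt selectors, and schedules are hypothetical placeholders, not the team's actual configuration:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical domain registry; in practice this might be loaded from YAML
# or a metadata service rather than hardcoded.
DOMAINS = {
    "sales": {"schedule": "@hourly", "selector": "tag:sales"},
    "finance": {"schedule": "@daily", "selector": "tag:finance"},
}


def build_dag(domain: str, cfg: dict) -> DAG:
    """Build one retry-enabled DAG per data domain from shared config."""
    with DAG(
        dag_id=f"{domain}_transformations",
        start_date=datetime(2024, 1, 1),
        schedule=cfg["schedule"],  # the `schedule` kwarg requires Airflow 2.4+
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        # Run the domain's dbt models; a custom operator could replace this.
        BashOperator(
            task_id="run_dbt_models",
            bash_command=f"dbt build --select {cfg['selector']}",
        )
    return dag


# Register each generated DAG at module level so the scheduler discovers it.
for _name, _cfg in DOMAINS.items():
    globals()[f"{_name}_transformations"] = build_dag(_name, _cfg)
```

Generating DAGs from a single registry is one way to keep per-domain pipelines consistent: onboarding a new domain becomes a configuration change rather than a copied-and-edited script.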
  • DevOps, Reliability & Standards
  • Infrastructure as Code: Manage the underlying platform infrastructure (warehouses, roles, storage integration) using Terraform or Helm. Click-ops is not an option.
  • CI/CD & Quality: Enforce a strict "DataOps" culture. Ensure every PR has unit tests, schema validation, and automated deployment pipelines (a sample schema check is sketched below).
  • Reliability (SRE): Build monitoring and alerting (Monte Carlo, Grafana, New Relic, Splunk) to detect data anomalies before stakeholders do.
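As an illustration of the PR-level checks described above, here is a minimal pytest-style schema contract test, assuming snowflake-connector-python; the table name, column contract, and connection setup are illustrative assumptions only:

```python
import snowflake.connector

# Hypothetical contract for a FCT_ORDERS table; real contracts would live
# alongside the dbt model definitions.
EXPECTED_SCHEMA = {
    "ORDER_ID": "NUMBER",
    "ORDER_TS": "TIMESTAMP_NTZ",
    "AMOUNT_USD": "NUMBER",
}


def fetch_columns(table_name: str) -> dict:
    """Read the deployed column -> type mapping from INFORMATION_SCHEMA."""
    # Credentials resolve from the connector's usual config sources
    # (environment variables or connections.toml); nothing is hardcoded here.
    conn = snowflake.connector.connect()
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT column_name, data_type "
            "FROM information_schema.columns "
            "WHERE table_name = %s",
            (table_name,),
        )
        return {name: dtype for name, dtype in cur.fetchall()}
    finally:
        conn.close()


def test_fct_orders_matches_contract():
    """Fail the PR if the deployed table drifts from the agreed contract."""
    assert fetch_columns("FCT_ORDERS") == EXPECTED_SCHEMA
```

Run in CI (for example, a GitHub Actions job), a failing assertion blocks the merge before a breaking schema change reaches downstream consumers.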
  • Collaboration & Modernization
  • Data Mesh Implementation: Work with domain teams (Sales, Marketing, Finance) to onboard them to the platform, helping them decentralize their data ownership while adhering to platform standards.
  • AI Readiness: Prepare structured data for AI consumption, ensuring high-quality, governed datasets are available for LLM agents and advanced analytics models.
  • Focus: System Design & Technical Leadership. You proactively identify problems (e.g., "Our ingestion pattern won't scale 10x") and design the architectural solution. You lead the technical direction for a squad.
  • Scope: You own entire subsystems or domain architectures. You are the "Tech Lead" for a group of engineers, driving technical consensus, RFCs, and coordinating cross-team dependencies.
  • Engineering Roots: Strong background in software engineering (Python/Java/Go) applied to data. You are comfortable writing custom API integrations and complex Python scripts.
  • The Modern Stack: Deep production experience with Snowflake (architecture/tuning) and dbt (Jinja/Macros/Modeling).
  • Workflow Orchestration: Advanced proficiency with Airflow (Managed Workflows for Apache Airflow).
  • Cloud Native: Hands-on experience with AWS services (S3, Lambda, IAM, ECS) and containerization (Docker/Kubernetes).
  • DevOps Mindset: Experience with Git, CI/CD (GitHub Actions/Jenkins), and Terraform.
  • 8+ years of experience, with a proven track record of leading technical projects or small teams.
  • Knowledge Graph Experience: Familiarity with Graph Databases (Neo4j) or Semantic Standards (RDF/SPARQL, TopQuadrant) is a strong plus as we integrate these technologies into the platform.
  • Open Table Formats: Experience with Apache Iceberg or Delta Lake.
  • Streaming: Experience with Kafka or Snowpipe Streaming.
  • AI Integration: Experience using AI coding assistants (Copilot, Cursor) to accelerate development.
  • Benefits
  • Time off programs
  • Medical, dental, vision, and mental health support
  • Paid parental leave
  • Life and disability insurance
  • 401(k)
  • An employee stock purchase program