STS Senior Director, Developer Experience and AI Experience

Samsung Electronics · Mountain View, CA
$280,000 - $320,000 · Hybrid

About The Position

Samsung Ads Engineering is seeking a Sr. Director, AI Experience (AIX), Developer Experience & SRE to lead three complementary but distinct disciplines that together raise the productivity ceiling of our entire Service Business Teams (SBT) organization.

AI Experience (AIX) operates as an umbrella initiative across all of SBT — not just engineering. This team identifies, prioritizes, and distributes AI-powered productivity solutions across all disciplines and functions. That means embedding AI capabilities into the SaaS tools our people already use (Salesforce, Slack, Atlassian), putting AWS Bedrock-backed tools in the hands of every SBT employee, and continuously evaluating where AI can remove toil, improve decisions, and accelerate output organization-wide.

Developer Experience (DevEx) zooms in on engineering teams specifically — not as an AI-first initiative, but as a listening-first one. Developers encounter friction that is often not best solved by AI: inefficient practices, fragmented communications, inadequate tooling, and slow build pipelines. This team surfaces and eliminates those systematic friction patterns across feedback loops, flow state, and cognitive load, treating developer experience as a measurable, improvable product.

Site Reliability Engineering (SRE) applies a similar operating model to a different domain: systematic patterns of service and component unreliability in production. This team emphasizes observability, data-driven prioritization, and an enablement model — partnering with engineering teams to embed reliability practices, define SLOs, and build the organizational habits that protect uptime and business continuity.

All three disciplines share a common posture: they identify patterns, prioritize interventions, enable teams, and measure outcomes. This leader must be able to hold all three lanes, build coherent strategy across them, and communicate their collective impact to senior leadership in business terms.

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, Business Systems, or related field. Non-traditional backgrounds considered with strong hands-on experience.
  • 8+ years leading technology initiatives that drive business outcomes, with 5+ years managing people.
  • Proven ability to bridge technical teams and non-technical business users in a large, globally distributed organization.
  • Hands-on leader comfortable building and configuring solutions while managing a team of individual contributors (ICs).
  • Experience working in a large tech organization; familiarity with regulated environments and global stakeholder dynamics a plus.
  • Working knowledge of agentic AI frameworks (e.g., LangChain, CrewAI, Amazon Bedrock Agents, or similar).
  • Experience evaluating, configuring, and deploying AI tools within SaaS platforms (e.g., Salesforce, Slack, Atlassian) and/or cloud AI services (AWS Bedrock, Azure OpenAI, or similar).
  • Understanding of prompt engineering and AI workflow orchestration; comfortable translating business needs into AI workflow designs.
  • Familiarity with responsible AI practices and enterprise AI governance in regulated environments.
  • Experience identifying, measuring, and reducing developer friction in engineering organizations — whether called DevEx, developer productivity, builder experience, or engineering experience.
  • Familiarity with developer productivity measurement frameworks (SPACE, DORA, DXI) and the ability to design mixed-methods measurement programs (telemetry + surveys + interviews), including Trust as a critical measurement dimension in AI-augmented development.
  • Experience defining golden paths, improving CI/CD pipelines, reducing feedback loop times, and investing in platform engineering as a foundation for developer productivity.
  • Strong background in observability and monitoring (e.g., Datadog, Grafana, CloudWatch, or similar).
  • Experience defining and operating against SLOs/SLAs in a production environment.
  • Familiarity with incident management, on-call practices, and post-incident review processes.
  • Hands-on experience with cloud infrastructure (AWS), container orchestration (Kubernetes), and CI/CD pipelines.
  • Exposure to chaos engineering, disaster recovery, and business continuity planning.
  • Experience with alerting and incident response tooling (e.g., PagerDuty).
  • Excellent communicator — able to make complex AI, DevEx, and SRE concepts accessible to non-technical audiences and translate developer and operational pain into business language.
  • Deep developer empathy — a listening-first leader who talks to engineers before prescribing solutions.
  • Strong organizational change skills — can sell initiatives, build coalitions, and sustain momentum over time.
  • Strategic thinker with a bias for action, able to manage multiple initiatives simultaneously across disciplines.

Nice To Haves

  • Experience driving AI adoption at scale in large organizations, including deploying AI into enterprise SaaS tooling.
  • Familiarity with the DORA AI Capabilities Model and understanding of how high-quality internal platforms are foundational for unlocking AI value.
  • Background in or exposure to ad-tech, high-throughput systems, or real-time data pipelines.
  • Knowledge of how AI tools change developer workflows — the shift from writing code to reviewing, prompting, and steering AI — and how to measure these changes ethically (team-level aggregation, transparency, privacy boundaries).
  • Comfortable influencing platform roadmaps as a key stakeholder rather than a direct builder.
  • Hands-on experience is valued over certifications.

Responsibilities

  • Own the AIX strategy and roadmap for all of SBT, in partnership with Cloud and AI Architects — identifying where AI can drive the most productivity across all disciplines, not just engineering.
  • Lead the identification, evaluation, and deployment of AI capabilities within existing SaaS tools (Salesforce, Slack, Atlassian, and others), minimizing context-switching and maximizing adoption.
  • Champion and operationalize AWS Bedrock-backed tools for all SBT staff — enabling business users to leverage agentic AI workflows without requiring deep technical skills.
  • Lead a small team designing and configuring agentic AI workflows for non-technical business users; serve as a bridge between technical AI capabilities and real-world business needs.
  • Partner with security, platform, and product teams to ensure AI initiatives are secure, compliant, and aligned to business KPIs — particularly in regulated environments.
  • Stay ahead of the AI landscape — continuously evaluating emerging capabilities (tools, models, frameworks) and driving adoption at scale where value is clear.
  • Measure AIX impact in business terms: time recovered, toil eliminated, adoption rate, and business outcomes enabled across SBT functions.
  • Lead a listening-first approach to DevEx: regularly talk to developers (“Walk me through yesterday — what was delightful? What was frustrating? Where did you slow down?”) to surface friction before prescribing solutions.
  • Identify and eliminate friction across the three essential dimensions: feedback loops, flow state, and cognitive load.
  • Recognize that most productivity gains come from process changes, not technology — prioritize low-lift, high-impact improvements (broken approval workflows, manual toil, flaky tests, slow provisioning, fragmented communications) before reaching for new tools.
  • Define and champion golden paths: well-supported, standardized developer workflows that reduce cognitive overhead and accelerate onboarding.
  • Treat developer experience as a product — continuously measuring, iterating, and improving based on developer feedback and usage data.
  • Spot warning signs of excessive friction: broken builds, flaky tests, overlong processes, difficult environment provisioning, high switching costs between teams, and reluctance to move across the organization.
  • Instrument and monitor the impact of AI tools on developer workflows, recognizing that AI is transforming flow state itself — developers now spend more time reviewing, prompting, and steering AI than writing code, which changes how friction manifests.
  • Invest in platform engineering as the foundation for DevEx: fast builds, reliable deployments, good documentation, and strong internal platform quality are what enable teams to benefit from AI tools.
  • Establish and maintain SLOs/SLAs across critical services; drive data-driven prioritization of reliability investments.
  • Lead the observability strategy — ensuring teams have the instrumentation, dashboards, and alerting needed to detect and resolve incidents quickly, using tools such as Datadog, Grafana, and CloudWatch.
  • Champion production resilience through chaos engineering, capacity planning, incident management processes, and business continuity planning.
  • Partner with engineering teams to embed reliability practices (deployment safety, rollback procedures, runbooks) into their development workflows — operating as an Enablement team that trains rather than builds in isolation.
  • Manage and continuously improve on-call practices and post-incident review processes, ensuring learning flows back into preventive investments.
  • Establish a DevEx measurement practice using SPACE, DORA, and DXI frameworks, combining quantitative telemetry (the “what”) with qualitative surveys and interviews (the “why”).
  • Extend the SPACE framework with Trust as a critical dimension for AI-augmented development — assessing whether developers over-trust AI output (shipping bugs) or under-trust it (wasting time double-checking correct code), and tracking code survivability rates.
  • Track and report on AI-specific metrics (prompting efficiency, validation effort, trust calibration) alongside traditional delivery metrics (deployment frequency, lead time, MTTR, change fail rate).
  • Translate developer experience and reliability improvements into business language: recovering time (developer hours to dollar value), saving money (tool consolidation, reduced incidents), making money (accelerating revenue via feature velocity, quality, and reliability), and proving correlations between technical and business outcomes.
  • Frame metrics differently for different audiences: developers care about time savings, reduced toil, and improved focus time; leadership cares about cost savings, speed to market, and competitive advantage.
  • Build compelling narratives for leadership — reasonably credible data woven into a clear story matters more than perfect ROI calculations.
  • Act as the primary liaison between engineering, platform, product, and business teams — able to influence real change even without direct authority.
  • Sell AIX, DevEx, and SRE initiatives to leadership; plan communications and manage organizational change across all three disciplines.
  • Translate complex technical concepts into actionable insights for non-technical audiences; ensure business requirements are clearly translated into technical specifications.
  • Design and deliver enablement programs, training, and documentation for business users adopting AI tools.
  • Define an integrated strategy and roadmap across AIX, DevEx, and SRE, aligning each to the organization’s long-term goals while maintaining clear ownership and success criteria for each discipline.
  • Apply a structured improvement methodology: secure leadership buy-in, start with quick wins, build feedback loops, measure impact, scale systematically, and communicate wins to sustain momentum.
  • Navigate the J-curve of DevEx improvement: early quick wins deliver big, visible impact; a plateau follows as obvious projects are completed; then acceleration compounds once telemetry infrastructure and platform foundations are in place.
  • Shift organizational focus from output metrics (lines of code, commit frequency) to outcome metrics (problem-solving speed, cognitive load, breadth of exploration, service reliability).
  • Champion the DORA AI Capabilities: clarify AI policies, connect AI to internal context, prioritize foundational practices, fortify safety nets, invest in internal platforms, focus on end-users, and foster continuous improvement.
  • Mentor and grow a high-performing team spanning all three disciplines; recruit, develop, and retain strong individual contributors.
  • Conduct performance reviews and provide constructive, growth-oriented feedback.
  • Foster a culture of continuous learning, experimentation, and evidence-based decision-making.

Benefits

  • Medical
  • Dental
  • Vision
  • Life Insurance
  • 401(k)
  • Employee Purchase Program
  • Tuition Assistance (after 6 months)
  • Paid Time Off
  • Student Loan Program (after 6 months)
  • Wellness Incentives