Adobe · Posted 10 days ago
Full-time • Mid Level
San Jose, CA
5,001-10,000 employees

Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Are you ready to have fun with data? As a Data Engineer focused on cloud spend optimization at Adobe, you’ll play a key role in transforming massive amounts of cloud usage data into actionable insights. You’ll combine strong data engineering fundamentals with analytical curiosity—helping surface patterns, trends, and opportunities to drive more efficient cloud operations across Adobe’s platforms and products.

This role sits at the intersection of data engineering, analytics, and AI innovation. You’ll build and maintain data pipelines that power our cost insights, partner with analysts and data scientists to interpret results, and experiment with emerging approaches—including AI Agent development—to automate data analysis and accelerate decision-making.

What you'll do:

  • Design, build, and maintain scalable and reliable data pipelines for cloud spend and utilization analytics.
  • Develop data models and transformations that make complex cloud usage data accessible and useful.
  • Analyze large datasets to identify trends, anomalies, and optimization opportunities.
  • Partner with data scientists and product engineers to translate findings into business and technical actions.
  • Contribute to the development of data-driven tools, including early experimentation with AI Agents for insight generation and automation.
  • Ensure data quality, integrity, and performance across all stages of the pipeline.
  • Document workflows, participate in code reviews, and continuously improve data processes.
What you need to succeed:

  • BS in Computer Science, Engineering, or a related field with 4+ years of experience in data engineering or data science.
  • Strong proficiency in SQL and Python for data wrangling, automation, and analysis.
  • Experience with AWS, dbt, and Airflow (or similar modern data stack tools).
  • Solid understanding of data modeling, warehousing concepts, and ETL/ELT pipeline design.
  • Comfortable with exploratory data analysis and visualization using tools like Pandas, Matplotlib, or Jupyter.
  • Analytical mindset with strong attention to detail and problem-solving skills.
  • Strong communication skills and a collaborative, growth-oriented attitude.
  • Curiosity about AI Agent development and how generative AI can transform analytics workflows.
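To give a flavor of the analytical side of the role, here is a minimal, hypothetical sketch of the kind of task described above—flagging anomalous days in cloud spend data using plain Python (the dataset, function name, and z-score threshold are illustrative assumptions, not part of the posting):

```python
from statistics import mean, stdev

def flag_spend_anomalies(daily_spend, threshold=2.0):
    """Flag days whose spend exceeds mean + threshold * stdev.

    daily_spend: list of (date, usd) tuples; returns the anomalous tuples.
    Hypothetical helper for illustration only.
    """
    costs = [usd for _, usd in daily_spend]
    mu, sigma = mean(costs), stdev(costs)
    cutoff = mu + threshold * sigma
    return [(day, usd) for day, usd in daily_spend if usd > cutoff]

# Hypothetical sample: a week of compute spend with one spike.
week = [("2024-06-01", 1200.0), ("2024-06-02", 1180.0),
        ("2024-06-03", 1210.0), ("2024-06-04", 1195.0),
        ("2024-06-05", 4800.0),  # e.g. a runaway autoscaling event
        ("2024-06-06", 1205.0), ("2024-06-07", 1190.0)]
print(flag_spend_anomalies(week))  # → [('2024-06-05', 4800.0)]
```

In practice this kind of check would run over warehouse tables via SQL or pandas rather than Python lists, but the core idea—surface outliers in usage data so they can be investigated—is the same.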