Adobe • Posted 8 days ago
Full-time • Mid Level
Lehi, UT
5,001-10,000 employees

Our Company

Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity

AEM Cloud Service is a $1.4B line of business and the industry leader in experience management, used by Fortune 100 companies. We are seeking a Senior Software Engineer to design, build, and operate high-performance, cloud-native backend services supporting Adobe Experience Manager (AEM) as a Cloud Service. This role focuses on distributed systems, Java-based service development, cloud operations, and automation, with data engineering, analytics, and AI experience as a plus.

What You Will Do

  • Design, develop, and test backend services using Java, Spring Boot, and cloud-native patterns.
  • Build and own high-traffic, mission-critical services running on Adobe and container platforms (Docker, Kubernetes, ECS/Fargate).
  • Implement CI/CD automation using Jenkins, CircleCI, or similar tools to enable fast, reliable deployments.
  • Participate in code reviews, architecture discussions, and operational readiness assessments.
  • Analyze service telemetry, improve performance, and contribute to on-call rotations for production systems.
  • Collaborate with cross-functional teams to deliver backend services and integrate data pipelines, SQL queries, and ETL workflows where required.
  • Leverage AI tools to improve developer productivity and efficiency.
  • Build GenAI and agentic workflows using the latest AI frameworks and tools.
  • (Plus) Enhance ingestion and processing pipelines using data engineering frameworks, including Spark, Kafka, Kinesis, Airflow, and data lakes.
  • (Plus) Support data flow, modeling, and analytics integration using Snowflake or Databricks, DBT, PowerBI, and modern data pipelines.

What You Need to Succeed

  • Strong experience in Java development, API design, distributed services, messaging, concurrency, and performance tuning.
  • 8+ years of industry experience in software engineering or distributed system development.
  • Deep understanding of cloud platforms including compute, storage, streaming, networking, and security.
  • Hands-on experience with container technologies: Docker, Kubernetes, and/or AWS ECS/Fargate.
  • Experience building automated systems using Jenkins or similar CI/CD frameworks.
  • Familiarity with observability platforms: New Relic, Grafana, Prometheus, CloudWatch, ELK, or similar.
  • Understanding of modern data processing frameworks such as Spark, Iceberg, Parquet, Airflow, and event-driven architectures.
  • Experience with real-time streaming systems: Kafka, Kinesis, Flink.
  • Experience building scalable ingestion pipelines, data platforms, or enterprise data services.
  • Experience with data modeling, SQL, Snowflake or Databricks, DBT, and PowerBI.
  • Familiarity with AEM or other large-scale CMS platforms (plus but not required).
  • Knowledge of LLM integrations, RAG architectures, or vector databases.