Senior Data Engineer

GM
Austin, TX
Hybrid

About The Position

This role is categorized as hybrid: the successful candidate is expected to report to the Austin Technical Center at least three times per week.

The Role

We are looking for a Java Microservices Developer to design, build, and support scalable, resilient microservices for our Diagnostics platform team. You will work closely with product managers, architects, and DevOps engineers to deliver secure, performant APIs and back-end services that power critical business and customer-facing applications.

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, Information Systems, or a related field, or equivalent practical experience.
  • 6-8+ years of professional experience in software engineering and/or data engineering, with a strong track record of delivering production systems.
  • Strong proficiency in Java and object-oriented design, with experience applying design patterns and clean architecture principles.
  • Hands-on experience building Quarkus and Spring Boot applications, including configuration management, dependency injection, and integration with external services.
  • Demonstrated experience designing and consuming REST APIs and building microservices architectures, including service contracts, versioning, and backward compatibility.
  • Strong knowledge of event-driven architectures and real-time data processing using Kafka or Azure Event Hubs (topics, partitions, consumer groups, schema evolution).
  • Deep experience with relational databases, especially PostgreSQL, including schema design, performance tuning, query optimization, and monitoring.
  • Hands-on experience with Azure cloud services, especially AKS, networking (ingress, load balancers), identity, and managed data services.
  • Experience implementing and maintaining CI/CD pipelines using GitHub Actions/Workflows, including build, test, quality gates, and deployment automation.
  • Solid Infrastructure-as-Code experience with Terraform, including modules, environment strategy, state management, and authoring Datadog monitors via code.
  • Experience with observability tooling such as Prometheus and Datadog, and the ability to define meaningful metrics, dashboards, and alerts.
  • Strong understanding of containerization with Docker and orchestration with Kubernetes, including configuration, scaling, and security best practices.
  • Proven ability to lead technical initiatives, drive decisions across stakeholders, and own systems from design through production support.
  • Excellent communication skills with the ability to explain complex technical concepts to both technical and non-technical audiences.

Nice To Haves

  • Experience with Azure platform services and architecture patterns, including networking, security, identity, and data services.
  • Experience with email marketing or customer communication platforms, such as Adobe Journey Optimizer (AJO) or similar tools, including template strategy, segmentation, and orchestration workflows.
  • Experience integrating enterprise marketing/communication tools with backend services and event streams for personalized, real-time experiences.
  • Understanding of security best practices in cloud-native and API development (OAuth/OpenID Connect, secret management, data encryption, least-privilege access).
  • Familiarity with telemetry, distributed tracing, and log analytics in cloud environments, and experience using them to diagnose and optimize production systems.
  • Experience building or operating large-scale customer engagement, notification, or messaging systems with high availability and strict SLAs.
  • Prior experience mentoring engineers, acting as a technical lead, or driving architecture decisions within a high-performing engineering team.
  • Interest or experience in applying AI and machine learning (including generative AI and LLM-based services) to enhance data platforms, developer productivity, and customer-facing capabilities.

Responsibilities

  • Own the end-to-end design, development, and operation of scalable data engineering pipelines and backend services using Java, Quarkus, and Spring Boot, ensuring reliability, observability, and maintainability.
  • Lead the design and implementation of cron-based and event-driven orchestration services that retrieve and process data from multiple enterprise systems via REST APIs and messaging platforms.
  • Architect and implement real-time data processing solutions using Kafka and Azure Event Hubs, including schema design, consumer group strategy, and resiliency patterns.
  • Design and optimize relational data models and database solutions using PostgreSQL and other relational data stores, including indexing strategies, query optimization, and performance tuning at scale.
  • Drive the deployment, scaling, and lifecycle management of services on Azure Kubernetes Service (AKS), including workload identity, networking, and security configuration.
  • Define and implement CI/CD pipelines using GitHub Actions/Workflows, and manage automated, GitOps-based deployments using ArgoCD across multiple environments.
  • Lead infrastructure automation using Terraform, establishing reusable modules, environment standards, and best practices for cloud resource provisioning and governance, including Datadog monitor creation and management.
  • Design and implement end-to-end observability using Prometheus, Datadog, and related tooling, including metrics, logs, traces, dashboards, and alerting with clear SLOs/SLIs.
  • Build and maintain data processing workflows using Databricks and distributed data frameworks, including batch and streaming jobs, job orchestration, and cost-optimized compute.
  • Collaborate closely with product, architecture, and cross-functional engineering teams to refine requirements, define technical roadmaps, and translate business outcomes into robust technical designs.
  • Drive performance, reliability, and scalability improvements across data and service layers, including load testing, capacity planning, and performance benchmarking.
  • Troubleshoot complex production issues, perform root cause analysis, and implement durable fixes and resiliency patterns.
  • Champion engineering best practices (code reviews, testing strategy, documentation, security, and monitoring) and help evolve team standards, patterns, and reference architectures.
  • Mentor and coach engineers on the team, providing technical guidance, pairing, and feedback to elevate overall engineering quality and delivery.

Benefits

  • GM offers a variety of health and wellbeing benefit programs.
  • Benefit options include medical, dental, vision, Health Savings Account, Flexible Spending Accounts, retirement savings plan, sickness and accident benefits, life insurance, paid vacation & holidays, tuition assistance programs, employee assistance program, GM vehicle discounts and more.