Principal Data Engineer

SS&C Technologies
Waltham, MA
$160,000 - $165,000
Hybrid

About The Position

SS&C is a leading financial services and healthcare technology company headquartered in Windsor, Connecticut, with over 27,000 employees in 35 countries. Approximately 20,000 financial services and healthcare organizations rely on SS&C for expertise, scale, and technology.

This role is for a Principal Data Engineer based in Waltham, MA, working in a hybrid model. The position involves leading the design and development of scalable real-time data platforms and distributed data processing systems. Key technologies include Java-based backend engineering, microservices architecture, event-driven systems, Apache Kafka, Apache Flink, cloud-native platforms, and AWS. The ideal candidate will have extensive experience building enterprise-scale streaming platforms, developing resilient microservices, optimizing large-scale data pipelines, and promoting engineering best practices. Collaboration with Product, Architecture, Application Development, Analytics, SRE, and DevOps teams is essential for delivering scalable, reliable, high-performance data solutions.

SS&C offers a unique environment that combines proprietary technology with deep industry expertise to support complex financial and healthcare operations, enabling clients to manage data, automate processes, and scale their businesses. Employees gain exposure to modern platforms, evolving technologies, real-world operational challenges, and large-scale enterprise environments.

Requirements

  • Strong expertise in Java, Spring Boot, REST API development, and Microservices Architecture
  • Proficiency in Python is preferred; experience with Node.js is a plus
  • Strong hands-on experience with Apache Kafka, Kafka Connect, Kafka Streams, Apache Flink, and AWS MSK
  • Extensive experience designing and developing cloud-native applications on AWS
  • Solid expertise in Kubernetes, Docker, Terraform/CloudFormation, ECS, and EKS environments
  • Strong knowledge of Oracle, PostgreSQL, Amazon Redshift, and Amazon Aurora databases
  • Experience with data modeling, database optimization, query tuning, indexing, and performance improvement strategies
  • Proven experience developing and maintaining CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, Maven/Gradle, and SonarQube
  • Strong expertise in monitoring, logging, and observability tools including CloudWatch, Prometheus, Grafana, ELK Stack, and related operational frameworks

Nice To Haves

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field
  • 8+ years of software engineering or data engineering experience
  • 5+ years of hands-on experience with Apache Kafka and event-driven architectures
  • 4+ years of experience with Apache Flink and real-time stream processing
  • Strong experience designing scalable, fault-tolerant, and highly available distributed systems
  • Proven experience leading enterprise-scale platform initiatives and mentoring engineering teams

Responsibilities

  • Lead the design and development of scalable real-time data platforms and event-driven streaming solutions using Apache Kafka, Apache Flink, Java, Spring Boot, and AWS cloud-native technologies.
  • Design and implement high-performance batch and streaming data pipelines with advanced stream-processing capabilities including stateful processing, windowing, event-time processing, checkpointing, fault tolerance, and exactly-once semantics.
  • Develop scalable microservices, REST APIs, reusable data frameworks, and enterprise data processing components.
  • Drive platform modernization, technical design reviews, engineering standards, and adoption of innovative technologies to improve scalability, reliability, performance, and operational efficiency.
  • Design and maintain cloud-native infrastructure, CI/CD pipelines, deployment automation, and containerized applications using Kubernetes, Docker, Terraform/CloudFormation, ECS/EKS, and AWS services including S3, MSK, Redshift, Aurora, RDS, Lambda, Glue, and CloudWatch.
  • Design optimized relational and analytical data models using Oracle, PostgreSQL, Redshift, and Aurora, including performance tuning, indexing, partitioning, and query optimization.
  • Implement observability, monitoring, alerting, logging, data quality validation, and reconciliation frameworks to ensure operational excellence and platform reliability.
  • Troubleshoot complex production issues, perform root-cause analysis, and collaborate with SRE and DevOps teams to improve platform stability, scalability, and deployment automation.
  • Provide technical leadership, mentorship, and guidance to engineering teams while driving best practices, governance, and continuous improvement initiatives.
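To make the stream-processing responsibilities above concrete, the sketch below illustrates tumbling event-time window assignment in plain Java: each event is bucketed by its timestamp into a fixed-size window, which is the same assignment Apache Flink's TumblingEventTimeWindows performs. The class name and event data here are hypothetical, and a production pipeline would of course use Flink's DataStream API with watermarks and checkpointing rather than this in-memory loop.

```java
import java.util.Map;
import java.util.TreeMap;

public class WindowSketch {
    // Map an event timestamp (ms) to the start of its tumbling window.
    static long windowStart(long eventTimeMs, long windowSizeMs) {
        return eventTimeMs - (eventTimeMs % windowSizeMs);
    }

    public static void main(String[] args) {
        long size = 60_000L; // 1-minute tumbling windows
        // Hypothetical events: {event time in ms, value}
        long[][] events = { {5_000, 1}, {59_999, 2}, {60_000, 3}, {119_000, 4} };

        // Aggregate values per window, keyed by window start time.
        Map<Long, Long> counts = new TreeMap<>();
        for (long[] e : events) {
            counts.merge(windowStart(e[0], size), e[1], Long::sum);
        }
        System.out.println(counts); // {0=3, 60000=7}
    }
}
```

Note that the timestamps 59,999 and 60,000 land in different windows even though they are one millisecond apart; handling such boundaries (and late-arriving events via watermarks) is exactly what the event-time processing listed above involves.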

Benefits

  • medical, dental, and vision coverage
  • a 401(k) plan with company match
  • paid time off, holidays, and parental leave
  • professional development reimbursement opportunities
© 2026 Teal Labs, Inc