Principal Software Engineer

Liberty Mutual Insurance
Boston, MA (Hybrid)

About The Position

About the Team – Claims Loss Data Capture: Our mission is to develop a platform that collects, updates, and retrieves loss data for both new and existing claims through easy‑to‑consume APIs that are fast, secure, flexible, configurable, and resilient to core system constraints. We enable downstream components to react proactively and asynchronously, powering experiences across FNOL (first notice of loss), claim registration, coverage, telematics crash detection, and customer communications. We work in an agile framework with a strong culture of collaboration, continuous improvement, and engineering excellence, partnering closely with claims, policy, and digital teams across the organization.

This is a hybrid role (two days in the office per week) based in one of our five tech locations: Plano, TX; Columbus, OH; Indianapolis, IN; Boston, MA; and Portsmouth, NH.

Role Overview: We’re looking for a Principal Software Engineer to serve as a technical leader for the Loss Data Capture platform. In this role, you will help shape the architecture and implementation of high‑scale services that power FNOL intake, loss data management, claim search, claims contact, coverage determination, telematics‑driven crash handling, and telematics crash communications. You’ll combine deep hands‑on engineering with system‑level thinking: shaping technical vision, guiding design and implementation, and mentoring engineers, while ensuring our services are secure, observable, resilient, and easy to integrate with.

Requirements

  • 8+ years of professional software engineering experience, including significant hands‑on work in Java/JVM with Spring/Spring Boot building production APIs and services.
  • Proven experience architecting and operating cloud‑hosted microservices at scale, ideally on platforms such as Cloud Foundry or Kubernetes with AWS‑backed services.
  • Strong background designing RESTful APIs and integration contracts for high‑throughput, low‑latency systems, including experience with OpenAPI/Swagger and API gateways (e.g., Apigee X or equivalent).
  • Solid experience with both relational databases (e.g., Oracle) and NoSQL/document datastores (e.g., MongoDB Atlas, DynamoDB, DocumentDB), including schema design, performance tuning, and data access patterns.
  • Hands‑on experience with event‑driven architectures and messaging platforms such as Kafka, including designing resilient publish/subscribe and streaming patterns.
  • Demonstrated ability to lead system design and architecture for complex, distributed systems, balancing functional requirements with scalability, resilience, and cost.
  • Strong understanding of security, privacy, and compliance in distributed systems (OAuth2, role‑based access, encryption in transit/at rest, PII handling) and experience participating in or leading threat modeling.
  • Experience with observability tooling (e.g., Splunk for logging, DataDog or similar for metrics and APM) and using data to drive performance, reliability, and capacity decisions.
  • Proven track record of technical leadership and mentoring, influencing cross‑team decisions, and partnering closely with product and business stakeholders.
  • Excellent communication skills, with the ability to explain complex technical concepts in clear, concise language to technical and non‑technical audiences.
  • Strong background in business operations and strategies, including global technology and financial services trends
  • Hands-on involvement with layered systems architectures, designs and shared software concepts
  • Familiarity with functional and system integration testing
  • Experience working in an agile environment
  • Excellent negotiation, facilitation and consensus-building capabilities
  • Openness and adaptability to respond to fast-moving circumstances
  • Proficiency in multiple object-oriented programming languages and tools
  • Aptitude for working in teams
  • In-depth knowledge of diverse and emerging technologies, architectural concepts and principles
  • A deep understanding of layered solutions and designs
  • Awareness of policies regarding security and privacy
  • Understanding of backlog tracking, burndown metrics and incremental delivery
  • A Bachelor’s or Master’s degree in a technical or business discipline, or equivalent experience
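To give a flavor of the resilience expectations above (production services that tolerate core system constraints such as transient failures and rate limits), here is a minimal, hypothetical Java sketch of retry with exponential backoff and jitter. All names are illustrative, not Liberty Mutual code:

```java
import java.util.Random;
import java.util.concurrent.Callable;

// Toy sketch of retry-with-backoff, a common resilience pattern when
// calling constrained or rate-limited core systems. Attempt counts and
// delays are illustrative placeholders.
public class RetryWithBackoff {
    private static final Random JITTER = new Random();

    // Retries `call` up to maxAttempts times, sleeping an exponentially
    // growing, jittered delay between failed attempts.
    public static <T> T execute(Callable<T> call, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                long delay = baseDelayMs * (1L << attempt)       // exponential growth
                           + JITTER.nextInt((int) baseDelayMs);  // plus random jitter
                Thread.sleep(delay);
            }
        }
        throw last; // exhausted all attempts
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // simulate two transient failures, then success
        String result = execute(() -> {
            if (failures[0]-- > 0) throw new RuntimeException("transient error");
            return "ok";
        }, 5, 10);
        System.out.println(result); // prints "ok"
    }
}
```

In a real Spring/Resilience4j stack this logic would typically come from a library rather than being hand-rolled; the sketch only shows the underlying idea.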

Nice To Haves

  • Experience in insurance, financial services, or other highly regulated domains, especially claims, policy, or telematics‑driven products.
  • Hands‑on experience with Guidewire ClaimCenter, including integrating external services or data platforms with ClaimCenter in a production environment.
  • Familiarity with canonical data models and legacy system integration patterns.
  • Experience working with telemetry/telematics vendors or messaging/notification platforms (SMS, push, email) in high‑volume customer‑facing systems.

Responsibilities

  • Design, build, and support highly available, scalable microservices and REST APIs that power FNOL intake, loss data capture and updates, claim and contact search, coverage determination, telematics‑driven crash handling, and customer communications.
  • Participate in end‑to‑end solution architecture and system design for JVM/Spring services deployed on cloud platforms, integrating with Apigee X, Entra ID, Kafka, and NoSQL/relational datastores.
  • Own and optimize high‑volume, low‑latency APIs handling millions of requests per day, ensuring strong performance, reliability, and disaster‑recovery readiness.
  • Model and manage data across relational databases and NoSQL/document stores (e.g., MongoDB Atlas, DynamoDB, DocumentDB), balancing consistency, performance, observability, and cost.
  • Design and evolve event‑driven integrations (Kafka) that connect FNOL, claim registration, coverage, crash events, and outbound communications in a resilient, decoupled way.
  • Embed security and privacy by design by driving threat modeling, enforcing modern authentication and authorization patterns (e.g., OAuth2/Entra ID via Apigee X), and protecting PII throughout the loss data ecosystem.
  • Define and champion standards for logging, metrics, and tracing (e.g., Splunk, DataDog) to ensure services are observable, easy to debug, and easy to operate.
  • Collaborate with product, architecture, and claims business partners to translate the team’s loss data platform vision into clear technical roadmaps and well‑designed APIs.
  • Provide strong operational support for team‑owned services, including participating in the on‑call rotation, responding to production incidents and client requests (e.g., via Slack and Splunk/DataDog dashboards), driving root‑cause analysis, and creating follow‑up work to improve reliability.
  • Mentor and coach engineers on system design, cloud‑native practices, testing, and operational excellence; foster a culture of continuous improvement and learning.
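To illustrate the decoupled, event‑driven style described in the responsibilities above (in production this would be Kafka topics, not an in‑process bus), here is a toy publish/subscribe sketch in plain Java; topic names and handlers are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal in-memory publish/subscribe bus. This stand-in only shows how
// a producer (e.g., crash detection) can publish an event that several
// downstream consumers (registration, communications) react to without
// being coupled to the producer or to each other.
public class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void publish(String topic, String payload) {
        // Deliver to every subscriber of the topic; unknown topics are a no-op.
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // Two independent consumers on the same hypothetical topic.
        bus.subscribe("crash.detected", e -> System.out.println("register claim for " + e));
        bus.subscribe("crash.detected", e -> System.out.println("notify customer for " + e));
        bus.publish("crash.detected", "policy-123");
    }
}
```

With Kafka the same shape holds, but events are durable and replayable, so consumers can fail and catch up independently, which is what makes the pattern resilient.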