About The Position

Peraton is seeking a highly experienced Senior Systems Engineer to lead the design, development, and implementation of our enterprise-scale data models and ontologies. This role is critical in transforming disparate data sources into a cohesive, searchable, and semantically rich knowledge graph. You will serve as a technical authority, guiding the evolution of our "Entity-Evidence" frameworks and ensuring the reliability of our distributed data services.

Requirements

  • Active TS/SCI clearance with polygraph.
  • Twenty (20) years of Systems Engineering experience on programs of similar scope, type, and complexity, with demonstrated expertise in planning and leading Systems Engineering efforts.
  • Bachelor’s degree in Systems Engineering, Computer Science, Information Systems, Engineering Science, Engineering Management, or a related discipline from an accredited institution; in lieu of a degree, an additional five (5) years of relevant Systems Engineering experience may be substituted.
  • Extensive experience in Data Architecture, with a focus on graph modeling and semantic technologies.
  • Mastery of Java and the Spring Boot ecosystem for building enterprise-grade backend services.
  • Deep technical proficiency with Apache Kafka, including cluster management, topic configuration, and performance tuning.
  • Proven track record of deploying and managing production workloads on Kubernetes and cloud platforms (AWS).
  • Experience with automated CI/CD workflows and Infrastructure as Code (Terraform/CloudFormation).
  • Strong communication skills with the ability to convey complex architectural concepts to both technical and executive audiences.

Nice To Haves

  • Expert knowledge of Neo4j (Cypher, APOC, GDS) and graph-based data science.
  • Experience with the Web Ontology Language (OWL) or the Resource Description Framework (RDF).
  • Understanding of social media sentiment analysis and its application to financial or predictive modeling.
  • Familiarity with large-scale data processing using Apache Spark.
  • Advanced degree in Computer Science, Systems Engineering, or a related quantitative field.

Responsibilities

  • Lead the design and implementation of complex domain ontologies and logical data models to support advanced analytics and discovery.
  • Develop and refine schemas for graph databases (Neo4j), ensuring optimal relationship modeling and query performance.
  • Establish standards for entity resolution, taxonomy management, and metadata tagging across the enterprise.
  • Oversee the ingestion of "EntitySourceDocuments" and associated evidence into unified semantic structures.
  • Design and manage high-throughput data pipelines using distributed streaming platforms like Apache Kafka.
  • Implement sophisticated message-key strategies and topic compaction to ensure data integrity and system efficiency.
  • Orchestrate containerized services within Kubernetes (K8s), leveraging Custom Resource Definitions (CRDs) and Operators for complex stateful workloads.
  • Integrate diverse data assets—from relational stores to unstructured documents—into a synchronized, scalable backend architecture.
  • Provide expert-level support for model implementation, ensuring alignment between conceptual models and physical deployments.
  • Mentor junior engineers on best practices in Java/Spring Boot development, API documentation (OpenAPI/Swagger), and cloud-native patterns.
  • Collaborate with data scientists and stakeholders to translate business requirements into technical specifications and architectural diagrams.
  • Conduct deep-dive troubleshooting for complex system bottlenecks, particularly within Kafka clusters and graph database clusters.
  • Define and implement monitoring and observability standards using Prometheus, Grafana, and the ELK stack.
  • Ensure the long-term scalability and maintainability of the "arbitranch" and "KEA" service ecosystems.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 5,001-10,000
