Senior Elastic Stack Data Integration Engineer

LaunchTech · Colorado Springs, CO

About The Position

LaunchTech is seeking a Senior Elastic Stack Data Integration Engineer to support the Missile Defense Agency (MDA) on the Integrated Research and Development for Enterprise Solutions (IRES) contract. In this role, you will serve as the primary technical authority for designing, building, and maintaining data ingestion pipelines supporting Elastic SIEM. The position focuses on architecting scalable Logstash ingestion frameworks, developing advanced pipeline logic, normalizing and enriching security telemetry, and ensuring reliable delivery of high-fidelity data into Elasticsearch for mission-critical cyber operations. The engineer's day-to-day duties are detailed under Responsibilities below.

Requirements

  • Must have 10 or more years of general (full-time) work experience
  • Experience requirement may be reduced with completion of advanced education
  • Must have 5 or more years of experience in log ingestion, data engineering, or SIEM pipeline development
  • Must have 2 or more years of experience in a lead or senior role mentoring and guiding technical teams
  • Must have strong experience with Elastic Stack components including Elasticsearch, Kibana, Beats, and Elastic Agent
  • Must have experience with data ingestion, transformation, and enrichment techniques
  • Must have hands-on experience ingesting and normalizing diverse log types including Windows events, syslog, firewall logs, cloud telemetry, and security tooling
  • Must be proficient with Linux administration, system debugging, and command-line operations
  • Must possess a DoD 8570.01-M IAT Level II certification with CE (Security+ CE, CySA+, GSEC, SSCP, CND, CCNA-Security, or equivalent)
  • Must have an active DoD Secret Security Clearance
  • Must be able to obtain a DoD Top Secret Security Clearance

Nice To Haves

  • Elastic Certified Engineer or relevant Elastic Stack certifications
  • Experience integrating Kafka, Redis, or other message bus technologies into ingestion pipelines
  • Proficiency in scripting languages such as Python, Bash, or PowerShell for automation and data validation
  • Experience designing geo-distributed or multi-cluster ingestion architectures
  • Knowledge of threat intelligence ingestion, correlation enrichment, and advanced ECS mapping
  • Experience with CI/CD pipelines, GitOps workflows, or Infrastructure-as-Code tools such as Terraform or Ansible
  • Familiarity with data quality assurance frameworks and pipeline testing methodologies
  • Knowledge of cloud-native logging architectures such as AWS Firehose, Azure Event Hub, or Google Cloud Logging

Responsibilities

  • Architect, build, and maintain Logstash pipelines to ingest and transform logs from diverse systems including network devices, servers, cloud services, and security platforms
  • Implement advanced parsing, grok patterns, JSON transformations, conditional routing, enrichment logic, and ECS mapping
  • Optimize pipeline performance, resiliency, and scalability through persistent queues, pipeline workers, memory tuning, and load balancing
  • Ensure ingested data aligns with Elastic Common Schema (ECS) or internal schema standards
  • Implement enrichment workflows including GeoIP enrichment, threat intelligence lookups, and metadata injection
  • Validate data completeness, integrity, and fidelity across ingestion pipelines
  • Maintain and optimize Logstash clusters including version management, scaling, tuning, and high-availability configurations
  • Manage integrations with Beats, Elastic Agent, Kafka, syslog endpoints, and custom data collectors
  • Monitor ingestion throughput, latency, and error rates while implementing proactive alerting and troubleshooting processes
  • Create and maintain documentation including pipeline diagrams, data flow maps, runbooks, and schema references
  • Establish enterprise standards for parsing, enrichment, normalization, and ingestion patterns
  • Support internal and external audits by documenting data handling flows and pipeline logic
  • Work closely with SIEM integration engineers to align ingestion pipelines with customer environments and logging requirements
  • Partner with detection engineering teams to ensure telemetry supports analytic coverage and detection rule development
  • Collaborate with infrastructure and platform operations teams to support deployment, scaling, and reliability engineering efforts
  • Mentor junior engineers and support technical leadership across pipeline architecture decisions
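As a hedged illustration of the pipeline work described above (not part of the official posting), a minimal Logstash pipeline combining grok parsing, ECS field mapping, GeoIP enrichment, and conditional output routing might look like the sketch below. The port, index names, hostnames, and log format are hypothetical stand-ins, not details from this role.

```conf
# firewall-pipeline.conf -- illustrative sketch only; all names are hypothetical.
input {
  syslog {
    port => 5514                      # receive firewall logs over syslog
  }
}

filter {
  # Parse a vendor-specific firewall message into ECS-style nested fields
  grok {
    match => { "message" => "%{IP:[source][ip]} -> %{IP:[destination][ip]} action=%{WORD:[event][action]}" }
  }

  # Map remaining vendor fields onto Elastic Common Schema (ECS) names
  mutate {
    rename    => { "host" => "[observer][hostname]" }
    add_field => { "[event][dataset]" => "firewall.traffic" }
  }

  # Enrich source addresses with GeoIP metadata
  geoip {
    source => "[source][ip]"
    target => "[source][geo]"
  }
}

output {
  # Conditionally route denied traffic to a dedicated index for detection teams
  if [event][action] == "deny" {
    elasticsearch {
      hosts => ["https://es.example.local:9200"]
      index => "logs-firewall.denied-default"
    }
  } else {
    elasticsearch {
      hosts => ["https://es.example.local:9200"]
      index => "logs-firewall.allowed-default"
    }
  }
}
```

In practice, resiliency duties such as persistent queues and pipeline worker tuning would be configured separately (e.g., in logstash.yml or pipelines.yml) rather than in the pipeline definition itself.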

Benefits

  • Medical, Dental, and Vision
  • 401(k) with company match
  • Paid Time Off (PTO)
  • Professional development opportunities