Senior Elastic Stack Data Integration Engineer - DOD

INNOVIM, Colorado Springs, CO
$130,000 - $150,000

About The Position

INNOVIM is seeking a Senior Elastic Stack Data Integration Engineer to support the Missile Defense Agency (MDA) on the Integrated Research and Development for Enterprise Solutions (IRES) contract.

Location: Redstone Arsenal, Huntsville, AL OR Schriever SFB, Colorado Springs, CO
Relocation Assistance: None
Position Closes: 1/25/26

The candidate will:

  • Serve as the primary technical authority for designing, building, and maintaining data ingestion pipelines supporting Elastic SIEM
  • Focus on creating scalable, resilient Logstash architectures
  • Develop advanced pipeline logic
  • Normalize, enrich, and transform security telemetry
  • Ensure reliable delivery of high-fidelity data to Elasticsearch

Requirements

  • Must have 5 or more years of experience in log ingestion, data engineering, or SIEM pipeline development.
  • Must have 2 or more years of experience working in a management or leadership role, mentoring and guiding other team members.
  • Must have a strong background in Elastic Stack components (Elasticsearch, Kibana, Beats, Elastic Agent).
  • Must have experience with data ingestion, processing, and enrichment techniques.
  • Must be proficient with Linux administration, system-level debugging, and CLI-based operations.
  • Must have a DoD 8570.01-M IAT Level II certification with Continuing Education (CE): CCNA Security, CySA+, GICSP, GSEC, Security+ CE, CND, or SSCP.
  • Must have an active DoD Secret Security Clearance.
  • Must be able to obtain a DoD Top Secret Security Clearance.
  • Have a deep command of Logstash architecture, patterns, and performance optimization.
  • Have a mastery of parsing, enrichment, normalization, and ECS alignment.
  • Have a strong understanding of network protocols, logging patterns, and telemetry generation from enterprise systems.
  • Have advanced troubleshooting skills across data ingestion, pipeline logic, and Elastic Stack processing layers.
  • Be able to design scalable, high-availability (HA) ingestion workflows with clear operational boundaries.
  • Be able to conduct data modeling, schema design, and transformation mapping.
  • Be effective at interfacing with multiple teams, gathering requirements, and aligning pipeline designs with SIEM analytics needs.
  • Be focused on reliability, maintainability, and observability across all pipeline components.
  • Have strong attention to detail and a disciplined approach to documentation, versioning, and configuration management.
  • Be able to work independently, drive pipeline architecture decisions, and mentor junior engineers.
  • Have strong documentation, workflow diagramming, and communication skills.

Nice To Haves

  • Be an Elastic Certified Engineer or have relevant Elastic Stack certifications.
  • Have strong experience integrating Kafka, Redis, or other message bus technologies into ingestion workflows.
  • Be proficient with scripting in Python, Bash, or PowerShell for automation and data validation (a brief validation sketch follows this list).
  • Have experience designing geo-distributed or multi-cluster ingestion architectures.
  • Have knowledge of threat intelligence ingestion, correlation data enrichment, and advanced ECS mapping.
  • Have experience with CI/CD pipelines, GitOps workflows, or Infrastructure-as-Code (Terraform, Ansible).
  • Be familiar with data quality assurance frameworks and pipeline testing methodologies.
  • Have knowledge of cloud-native logging architectures (AWS Firehose, Azure Event Hub, GCP Logging).
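
As a flavor of the scripting mentioned above, here is a minimal Python sketch that checks newline-delimited JSON events for required ECS fields. The field list and file format are assumptions made for illustration, not requirements taken from this posting.

```python
#!/usr/bin/env python3
"""Illustrative ECS field check for newline-delimited JSON events."""
import json
import sys

# Assumed required fields for the sketch; a real list comes from schema requirements
REQUIRED_FIELDS = ["@timestamp", "event.dataset", "source.ip"]

def get_dotted(event: dict, dotted: str):
    """Resolve a dotted ECS path (e.g. source.ip) against a nested JSON event."""
    node = event
    for key in dotted.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def main(path: str) -> int:
    missing = 0
    with open(path, encoding="utf-8") as fh:
        for line_no, line in enumerate(fh, start=1):
            if not line.strip():
                continue  # skip blank lines
            event = json.loads(line)
            for field in REQUIRED_FIELDS:
                if get_dotted(event, field) is None:
                    print(f"line {line_no}: missing {field}")
                    missing += 1
    return 1 if missing else 0

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: check_ecs_fields.py <events.ndjson>")
    sys.exit(main(sys.argv[1]))
```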

Responsibilities

  • Architect, build, and maintain Logstash pipelines to ingest and transform logs from diverse systems, including network devices, servers, cloud services, and security platforms.
  • Implement parsing, grok patterns, JSON transformations, conditional routing, enrichment logic, and ECS mapping (see the pipeline sketch after this list).
  • Optimize pipeline performance, resiliency, and scalability (e.g., persistent queues, pipeline workers, memory tuning, load balancing); a tuning sketch follows this list.
  • Ensure all ingested data aligns to ECS (Elastic Common Schema) or internal schema requirements.
  • Implement data enrichment workflows (GeoIP, threat intel lookups, metadata injection).
  • Validate data completeness, integrity, and fidelity across ingestion flows.
  • Maintain and optimize Logstash clusters, including version management, scaling, tuning, and high-availability configurations.
  • Manage integrations with Beats, Elastic Agent, Kafka, syslog endpoints, and custom data collectors (a Kafka input example follows this list).
  • Monitor ingestion throughput, latency, and error rates; implement proactive alerting and troubleshooting processes.
  • Create and maintain technical documentation, including pipeline diagrams, data flow maps, runbooks, and schema references.
  • Establish enterprise standards for parsing, enrichment, normalization, and ingestion patterns.
  • Support internal and external audits by documenting data handling flows and pipeline logic.
  • Work closely with SIEM integration engineers to align pipelines with customer environments and logging requirements.
  • Partner with detection engineering teams to ensure data supports analytic coverage and rule development.
  • Collaborate with infrastructure and platform operations for deployment, scaling, and reliability engineering.
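
To illustrate the parsing, ECS alignment, and enrichment work described above, here is a minimal Logstash pipeline sketch. The grok pattern, hostnames, and endpoints are assumptions for the example, not details from this posting.

```
# firewall-syslog.conf — illustrative pipeline; pattern and endpoints are placeholders
input {
  beats {
    port => 5044            # events forwarded by Beats / Elastic Agent
  }
}

filter {
  # Parse a simple firewall-style syslog line into named fields
  grok {
    match => {
      "message" => "%{SYSLOGTIMESTAMP:syslog_ts} %{IPORHOST:[host][name]} %{WORD:[event][action]} src=%{IP:[source][ip]} dst=%{IP:[destination][ip]}"
    }
  }

  # Normalize the parsed timestamp into @timestamp
  date {
    match => ["syslog_ts", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss"]
    remove_field => ["syslog_ts"]
  }

  # Enrich under the ECS source.geo.* namespace
  geoip {
    source => "[source][ip]"
    target => "[source][geo]"
  }

  # Conditional routing: tag private-range traffic for downstream handling
  if [source][ip] =~ /^10\./ {
    mutate { add_tag => ["internal"] }
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]   # placeholder endpoint
    data_stream => true                   # route into logs-* data streams
  }
}
```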
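
The resiliency and scaling levers named above (persistent queues, pipeline workers, batch sizing) are typically set in pipelines.yml. The values below are illustrative placeholders; real settings are sized to hardware and event volume.

```yaml
# pipelines.yml — illustrative values only
- pipeline.id: firewall-syslog
  path.config: "/etc/logstash/conf.d/firewall-syslog.conf"
  pipeline.workers: 8          # commonly sized to available CPU cores
  pipeline.batch.size: 1000    # larger batches trade latency for throughput
  queue.type: persisted        # persistent queue buffers events across restarts
  queue.max_bytes: 4gb         # disk budget for the persistent queue
```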
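
As one example of the message-bus integrations listed above, a Kafka input can front Logstash to buffer bursts of telemetry. Broker addresses, topic, and group ID here are hypothetical.

```
# Hypothetical Kafka front end for buffered ingestion
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics            => ["security-telemetry"]
    group_id          => "logstash-siem"
    codec             => "json"
  }
}
```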

Benefits

  • Nationwide medical/dental/vision insurance programs
  • Life insurance
  • Matching 401(k) contribution
  • Educational/training support

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: None listed
  • Number of Employees: 101-250
