Senior Elastic Stack Data Integration Engineer IRES - SSFB/HSV

Amentum
Colorado Springs, CO
$130,000 - $150,000 | Onsite

About The Position

The Senior Elastic Stack Data Integration Engineer supports the Missile Defense Agency (MDA) on the Integrated Research and Development for Enterprise Solutions (IRES) contract. The candidate will serve as the primary technical authority for designing, building, and maintaining data ingestion pipelines supporting Elastic SIEM, with a focus on:

  • Creating scalable, resilient Logstash architectures
  • Developing advanced pipeline logic
  • Normalizing, enriching, and transforming security telemetry
  • Ensuring reliable delivery of high-fidelity data to Elasticsearch

Requirements

  • Must have 10 or more years of general (full-time) work experience (may be reduced with completion of advanced education)
  • Must have 5 or more years of experience in log ingestion, data engineering, or SIEM pipeline development
  • Must have 2 or more years of experience in a management or leadership role, mentoring and guiding other team members
  • Must have a strong background in Elastic Stack components (Elasticsearch, Kibana, Beats, Elastic Agent).
  • Must have experience with data ingestion, processing, and enrichment techniques.
  • Must have hands-on experience ingesting, processing, and normalizing diverse log types (Windows events, syslog, firewall logs, cloud telemetry, security tooling).
  • Must be proficient with Linux administration, system-level debugging, and CLI-based operations.
  • Must have a DoD 8570.01-M IAT Level II certification with Continuing Education (CE) - (CCNA-Security, CySA+, GICSP, GSEC, Security+ CE, CND, SSCP)
  • Must have an active DoD Secret Security Clearance
  • Must be able to obtain a DoD Top Secret Security Clearance

Nice To Haves

  • Be an Elastic Certified Engineer or have relevant Elastic Stack certifications.
  • Have strong experience integrating Kafka, Redis, or other message bus technologies into ingestion workflows.
  • Be proficient with scripting in Python, Bash, or PowerShell for automation and data validation.
  • Have experience designing geo-distributed or multi-cluster ingestion architectures.
  • Have knowledge of threat intelligence ingestion, correlation data enrichment, and advanced ECS mapping.
  • Have experience with CI/CD pipelines, GitOps workflows, or Infrastructure-as-Code (Terraform, Ansible).
  • Be familiar with data quality assurance frameworks and pipeline testing methodologies.
  • Have knowledge of cloud-native logging architectures (AWS Firehose, Azure Event Hub, GCP Logging).
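Several of the items above (Python scripting for data validation, ECS mapping) can be illustrated with a short sketch. The field names follow Elastic Common Schema conventions, but the input record shape, function names, and required-field list are illustrative assumptions, not part of this posting.

```python
# Illustrative sketch: normalize a hypothetical raw firewall record into
# ECS-style field names, then validate required fields before shipping.

# Hypothetical minimum field set for this example, not an official ECS list.
REQUIRED_ECS_FIELDS = ["@timestamp", "event.category", "source.ip"]

def to_ecs(raw: dict) -> dict:
    """Map an assumed raw record shape onto flattened ECS field names."""
    return {
        "@timestamp": raw.get("ts"),
        "event.category": "network",
        "source.ip": raw.get("src"),
        "destination.ip": raw.get("dst"),
        "event.action": raw.get("action"),
    }

def missing_fields(doc: dict) -> list:
    """Return required ECS fields that are absent or empty."""
    return [f for f in REQUIRED_ECS_FIELDS if not doc.get(f)]

raw = {"ts": "2024-05-01T12:00:00Z", "src": "10.0.0.5",
       "dst": "8.8.8.8", "action": "allow"}
doc = to_ecs(raw)
problems = missing_fields(doc)
```

A validation step like this typically runs in CI against sample events, so schema drift in an upstream source is caught before it reaches production indices.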

Responsibilities

  • Architect, build, and maintain Logstash pipelines to ingest and transform logs from diverse systems, including network devices, servers, cloud services, and security platforms.
  • Implement parsing, grok patterns, JSON transformations, conditional routing, enrichment logic, and ECS mapping.
  • Optimize pipeline performance, resiliency, and scalability (e.g., persistent queues, pipeline workers, memory tuning, load balancing).
  • Ensure all ingested data aligns to ECS (Elastic Common Schema) or internal schema requirements.
  • Implement data enrichment workflows (GeoIP, threat intel lookups, metadata injection).
  • Validate data completeness, integrity, and fidelity across ingestion flows.
  • Maintain and optimize Logstash clusters, including version management, scaling, tuning, and high-availability configurations.
  • Manage integrations with Beats, Elastic Agent, Kafka, syslog endpoints, and custom data collectors.
  • Monitor ingestion throughput, latency, and error rates; implement proactive alerting and troubleshooting processes.
  • Create and maintain technical documentation, including pipeline diagrams, data flow maps, runbooks, and schema references.
  • Establish enterprise standards for parsing, enrichment, normalization, and ingestion patterns.
  • Support internal and external audits by documenting data handling flows and pipeline logic.
  • Work closely with SIEM integration engineers to align pipelines with customer environments and logging requirements.
  • Partner with detection engineering teams to ensure data supports analytic coverage and rule development.
  • Collaborate with infrastructure and platform operations for deployment, scaling, and reliability engineering.
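The pipeline responsibilities above (grok parsing, conditional routing, GeoIP enrichment, ECS-aligned output) can be sketched as a minimal Logstash configuration. The port, hosts, index names, and grok pattern are illustrative assumptions, not the actual IRES environment.

```conf
# Minimal illustrative Logstash pipeline: syslog in, grok parse,
# GeoIP enrichment, conditional routing, ECS-aligned output.
input {
  syslog { port => 5514 }                      # example syslog listener
}

filter {
  grok {
    # Hypothetical pattern writing into nested ECS fields
    match => { "message" => "%{IPORHOST:[source][address]} %{WORD:[event][action]}" }
    tag_on_failure => ["_grokparsefailure"]
  }
  if "_grokparsefailure" not in [tags] {
    geoip { source => "[source][address]" }    # GeoIP enrichment
  }
  mutate { add_field => { "[event][dataset]" => "network.syslog" } }
}

output {
  if "_grokparsefailure" in [tags] {
    # Route unparsed events to a separate index for review
    elasticsearch { hosts => ["https://es:9200"] index => "logs-failed" }
  } else {
    elasticsearch { hosts => ["https://es:9200"] index => "logs-network-default" }
  }
}
```

Routing parse failures to their own index, rather than dropping them, preserves data fidelity and makes grok regressions visible through index-level monitoring.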

Benefits

Our health and welfare benefits are designed to support you and your priorities. Offerings include:
  • Health, dental, and vision insurance
  • Paid time off and holidays
  • Retirement benefits (including 401(k) matching)
  • Educational reimbursement
  • Parental leave
  • Employee stock purchase plan
  • Tax-saving options
  • Disability and life insurance
  • Pet insurance
  • Note: Benefits may vary based on employment type, location, and applicable agreements.
  • Positions governed by a Collective Bargaining Agreement (CBA), the McNamara-O'Hara Service Contract Act (SCA), or other employment contracts may include different provisions/benefits.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Education Level: No Education Listed
  • Number of Employees: 5,001-10,000
