Data Engineer

Parsons Corporation, Reston, VA

About The Position

Parsons is looking for an amazingly talented Data Engineer to join our team! In this role, you will support the design, implementation, and sustainment of the data plane for a large-scale distributed system spanning microservices, event processing, search/analytics, enterprise reporting, and operational telemetry. You will work across both cloud-native and constrained edge-style deployments, helping ensure data is collected, transformed, moved, searched, and visualized reliably even in challenging operating environments. This role is not a traditional database administrator position. Instead, it is focused on data engineering across distributed microservices, event pipelines, search/analytics platforms, enterprise data integration, and resilient/manual data movement workflows.

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, Data Engineering, Information Systems, or a related technical field. Four additional years of experience can substitute for a degree.
  • 10+ years of software and/or data engineering experience
  • Strong experience in data engineering for distributed applications and microservices-based systems
  • Hands-on experience with Java-based service ecosystems; MongoDB; Elasticsearch/ELK; and RabbitMQ or similar message/event platforms
  • Good understanding of distributed systems concepts, including eventual consistency; retries and idempotency; partition tolerance and degraded/offline operation; and data reconciliation and replay
  • Experience developing and supporting data pipelines across both operational systems and analytics/reporting environments
  • Experience designing robust data movement mechanisms, including fallback/manual transfer workflows for disrupted environments
  • Experience with Power BI and/or Power Automate integration pipelines, including data shaping and feed preparation for dashboards and workflow automation
  • Experience working with telemetry/log/metric pipelines and operational dashboarding concepts
  • Ability to write clean technical documentation for schemas, data flows, and operating procedures
  • Strong oral and written communication skills and the ability to work cooperatively and effectively as a team member
  • Domestic or international travel may be required.
  • An active Top Secret/SCI security clearance is required for this position.
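To illustrate the retries-and-idempotency concept called out above: a consumer on a message platform such as RabbitMQ may receive the same message more than once after a retry or redelivery, so side effects must be applied at most once. A minimal, broker-agnostic sketch; the class name and message-ID scheme are hypothetical, and a production version would persist the seen-ID set rather than hold it in memory:

```java
import java.util.HashSet;
import java.util.Set;

/** Hypothetical sketch: dedupe redelivered messages by ID before applying side effects. */
public class IdempotentConsumer {
    // In-memory dedupe store; a real service would back this with a durable store.
    private final Set<String> processedIds = new HashSet<>();

    /**
     * Runs the handler only for messages not seen before.
     * Returns true if the message was newly processed, false for a duplicate delivery.
     */
    public boolean process(String messageId, Runnable handler) {
        if (!processedIds.add(messageId)) {
            return false; // duplicate (e.g., redelivered after a retry): skip side effects
        }
        handler.run();
        return true;
    }
}
```

The same dedupe-by-ID pattern also supports reconciliation and replay: an entire event stream can be re-run safely because already-applied messages are skipped.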

Nice To Haves

  • Master’s degree in Data Engineering, Computer Science, Analytics, or related field
  • Experience with service platform tooling such as Consul, Nomad, and Vault
  • Familiarity with legacy-to-modern data transition patterns, including relational to document-oriented migration approaches
  • Experience supporting systems deployed in constrained environments with intermittent connectivity and variable infrastructure quality
  • Experience with PostgreSQL and Grails/Java legacy systems in migration or sustainment contexts
  • Experience with search engineering, semantic search, or cognitive search implementations using Elasticsearch
  • Experience building data feeds that support operational dashboards, executive reporting, and data science workflows
  • Familiarity with observability tooling transitions and telemetry normalization across multiple sources
  • Ability to identify data architecture improvements, develop prototypes, and help build the case for operational enhancements
  • Experience training users or technical teams as new data capabilities move into production

Responsibilities

  • Designing, developing, and maintaining the data plane for a distributed microservices ecosystem consisting of approximately 25 core Java microservices and approximately 25 ancillary integration microservices connecting to external systems
  • Supporting data flows across a modern service platform built around MongoDB for operational data storage; Elasticsearch/ELK for analytics, search, cognitive/semantic search use cases, and telemetry analysis; RabbitMQ for event-driven processing and message distribution; and service orchestration and security components such as Consul, Nomad, and Vault
  • Engineering data solutions that can operate across a wide range of deployment models, from very small, constrained environments (e.g., a few bare-metal small-form-factor servers) to larger hybrid/cloud-native environments that can scale to hundreds of nodes
  • Building and optimizing data pipelines that support operational application data flows; telemetry ingestion (hot/warm/cold paths); dashboard/reporting data feeds; and enterprise data integration across inventory, asset, and financial systems
  • Developing resilient mechanisms for manual or semi-manual data extraction, transfer, and re-ingestion when automated connectivity is degraded or unavailable due to technical, physical, or geopolitical disruptions
  • Supporting data migration activities between the legacy platform (built with Java/Grails on PostgreSQL) and the current distributed platform (built on MongoDB), working within existing migration processes and improving tooling, reliability, and observability where appropriate
  • Building and maintaining integrations that feed Power BI dashboards, Power Automate workflows, and downstream data science/analytics use cases
  • Enabling operational visibility through telemetry engineering: ingesting and transforming logs, events, and metrics; supporting observability pipelines based on ELK and current/transitioning monitoring tooling; and helping shape future-state consolidation toward a more ELK-centric observability model
  • Collaborating closely with software engineers, platform engineers, SRE/operations, data scientists, and enterprise stakeholders to operationalize data and deliver reliable insights
  • Documenting data models, interfaces, schemas, transformations, operational procedures, and recovery workflows
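A common first step in the relational-to-document migration work described above (Java/Grails on PostgreSQL to MongoDB) is folding flat, prefixed relational columns into nested sub-documents. A minimal, driver-free sketch of that shaping step; the class name, column-prefix convention, and field names are hypothetical, not taken from the actual schemas:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical sketch: reshape a flat relational row into a nested document. */
public class RowToDocument {
    /**
     * Folds columns sharing a prefix (e.g., owner_name, owner_email) into a
     * nested sub-document keyed by that prefix; other columns pass through.
     */
    public static Map<String, Object> toDocument(Map<String, Object> row, String prefix) {
        Map<String, Object> doc = new LinkedHashMap<>();
        Map<String, Object> nested = new LinkedHashMap<>();
        String marker = prefix + "_";
        for (Map.Entry<String, Object> e : row.entrySet()) {
            if (e.getKey().startsWith(marker)) {
                // Strip the prefix: "owner_name" becomes "name" inside the sub-document.
                nested.put(e.getKey().substring(marker.length()), e.getValue());
            } else {
                doc.put(e.getKey(), e.getValue());
            }
        }
        if (!nested.isEmpty()) {
            doc.put(prefix, nested);
        }
        return doc;
    }
}
```

In a real migration this transform would sit between a JDBC read from the legacy PostgreSQL schema and a MongoDB driver write, with reconciliation checks comparing row and document counts on each side.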

Benefits

  • Medical
  • Dental
  • Vision
  • Paid time off
  • Employee Stock Ownership Plan (ESOP)
  • 401(k)
  • Life insurance
  • Flexible work schedules
  • Holidays