About The Position

Peraton is seeking an experienced Data Engineer / Backend Developer to join our Federal Strategic Cyber program. Location: Arlington, VA. Hybrid telework/remote work environment; the selected candidate must be able to come on-site as needed. The full position description is listed under Responsibilities below.

Requirements

  • Bachelor’s degree and a minimum of 8 years of related technical experience required; an additional 4 years of experience may be substituted in lieu of a degree.
  • Proficiency in scripting and automation (Bash, Python, etc.); see the automation sketch after this list.
  • Proven experience with OpenShift, Kubernetes, and other container-based platforms in production environments.
  • Strong understanding of containerization principles (Docker, Podman, etc.).
  • Ability to work independently and collaboratively in a team environment.
  • Experience interfacing with clients, technical support personnel, and other technical professionals to ensure efficient and effective service.
  • U.S. Citizenship is required.
  • Ability to obtain a Top Secret security clearance with SCI eligibility is required.
  • The selected candidate must also be able to obtain and maintain a favorably adjudicated DHS background investigation, both to start and for continued employment.
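
As one concrete illustration of the scripting-and-automation requirement above, here is a minimal Python sketch that automates a Kubernetes/OpenShift rollout check by shelling out to kubectl. The namespace and deployment names are hypothetical stand-ins, not details from this program.

```python
import subprocess
import sys

# Hypothetical names for illustration only; a real script would take
# these from configuration or CLI arguments.
NAMESPACE = "etl"
DEPLOYMENT = "transform-worker"

def rollout_status(deployment: str, namespace: str, timeout: str = "120s") -> bool:
    """Block until the rollout completes (or times out) and report success."""
    result = subprocess.run(
        ["kubectl", "rollout", "status", f"deployment/{deployment}",
         "-n", namespace, f"--timeout={timeout}"],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    sys.exit(0 if rollout_status(DEPLOYMENT, NAMESPACE) else 1)
```

On OpenShift, the same invocation works with `oc` in place of `kubectl`.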

Nice To Haves

  • Experience with the Databricks Data and AI platform.
  • Experience implementing and utilizing AI models to explore and refine data.
  • Experience training task-specific AI models for normalization and alerting (a minimal sketch follows this list).
  • Experience with SAFe Agile testing environments.
  • Experience using OpenShift to run microservice architectures designed to manage scale of data transformation processes.
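
To make the normalization-and-alerting item concrete, here is a minimal sketch of model-assisted alerting. It uses scikit-learn's IsolationForest on synthetic numeric features as a stand-in; the actual models, features, and platform (e.g., Databricks) on this program are not specified here.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in features (e.g., bytes transferred, request rate);
# a real feature set would come from the program's transformed data.
rng = np.random.default_rng(42)
normal = rng.normal(loc=100.0, scale=10.0, size=(500, 2))
spikes = rng.normal(loc=400.0, scale=20.0, size=(5, 2))
records = np.vstack([normal, spikes])

# Train an unsupervised anomaly detector and flag outliers for alerting.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(records)  # -1 = anomaly, 1 = normal

for idx in np.where(labels == -1)[0]:
    print(f"ALERT: record {idx} looks anomalous: {records[idx]}")
```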

Responsibilities

  • Design, build, and maintain Extract, Transform, and Load (ETL) pipelines using industry best practices.
  • Work with data management teams to design transform scripts in languages like Python that convert raw data to standardized data schemas (see the sketch after this list).
  • Implement and manage containerization strategies using OpenShift/Kubernetes.
  • Collaborate with development teams to integrate CI/CD pipelines into the software development lifecycle.
  • Monitor and optimize system performance, reliability, and availability.
  • Automate infrastructure and application deployment processes.
  • Ensure security best practices are implemented throughout all DevOps procedures.
  • Troubleshoot and resolve issues in development, test, and production environments.
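
As a sketch of the transform work described in the first two items above, the following Python script converts hypothetical raw records into a standardized schema with per-record error isolation. The field names and severity scale are assumptions for illustration, not the program's actual schemas.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical standardized schema; illustrative stand-in fields only.
@dataclass
class StandardEvent:
    source_ip: str
    observed_at: str  # ISO-8601 UTC timestamp
    severity: int     # normalized to a 0-10 scale

SEVERITY_MAP = {"low": 2, "medium": 5, "high": 8, "critical": 10}

def transform(raw: dict) -> StandardEvent:
    """Convert one raw record (assumed shape shown in `sample` below)
    into the standardized schema."""
    return StandardEvent(
        source_ip=raw["src_ip"],
        observed_at=datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        severity=SEVERITY_MAP.get(str(raw.get("sev", "")).lower(), 0),
    )

def run_pipeline(raw_records):
    """Transform records one by one, isolating malformed inputs so a
    single bad record does not halt the pipeline."""
    for raw in raw_records:
        try:
            yield asdict(transform(raw))
        except (KeyError, TypeError, ValueError, OSError) as exc:
            # In production this would route to a dead-letter queue.
            print(f"skipping malformed record: {exc!r}")

if __name__ == "__main__":
    sample = [
        {"src_ip": "10.0.0.1", "ts": 1700000000, "sev": "high"},
        {"ts": 1700000100, "sev": "low"},  # missing src_ip -> skipped
    ]
    for event in run_pipeline(sample):
        print(json.dumps(event))
```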

What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Number of Employees: 5,001-10,000 employees
