Data Science / Developer

Peraton
Posted 7 days ago · $112,000 - $179,000 · Remote

About The Position

Peraton is seeking an experienced Data Scientist / Developer to join our Federal Strategic Cyber program. Location: 100% remote; must be able to come on-site as needed. The position's duties are detailed under Responsibilities below.

Requirements

  • Bachelor’s degree and a minimum of 8 years related technical experience required.
  • An additional 4 years of experience may be substituted for a degree.
  • Proficiency in scripting and automation (Bash, Python, etc.).
  • Strong understanding of containerization principles (Docker, Podman, etc.).
  • Ability to work independently and collaboratively in a team environment.
  • Experience interfacing with clients, technical support personnel, and other technical professionals to ensure efficient and effective service.
  • U.S. Citizenship is required.
  • Ability to obtain a Top Secret security clearance with SCI eligibility required.
  • In addition, selected candidate must be able to obtain and maintain a favorably adjudicated DHS background investigation to start and for continued employment.

Nice To Haves

  • Experience with the Databricks Data and AI platform.
  • Proven experience with OpenShift, Kubernetes, and other container-based platforms in production environments.
  • Experience implementing and utilizing AI models to explore and refine data.
  • Experience training task-specific AI models for normalization and alerting.
  • Experience with SAFe Agile testing environments.
  • Experience using OpenShift to run microservice architectures designed to manage scale of data transformation processes.

Responsibilities

  • Design, build, and maintain Extract, Transform, and Load (ETL) pipelines using industry best practices.
  • Work with data management teams to design transform scripts in languages like Python to convert raw data to standardized data schemas.
  • Implement and manage containerization strategies using OpenShift/Kubernetes.
  • Collaborate with development teams to integrate CI/CD pipelines into the software development lifecycle.
  • Monitor and optimize system performance, reliability, and availability.
  • Automate infrastructure and application deployment processes.
  • Ensure security best practices are implemented throughout all DevOps procedures.
  • Troubleshoot and resolve issues in development, test, and production environments.
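As a rough illustration of the transform-script duty above (raw data converted to a standardized schema in Python), a minimal sketch might look like the following. All field names and the schema itself are hypothetical, not taken from the posting:

```python
# Hypothetical transform script: map raw records onto a standardized schema.
# Field names ("id", "src", "ts", "event_id", etc.) are illustrative only.
from datetime import datetime, timezone


def transform(raw: dict) -> dict:
    """Convert one raw record to the standardized schema, normalizing types:
    ids become strings, sources are lowercased, timestamps become ISO 8601 UTC."""
    return {
        "event_id": str(raw["id"]),
        "source": raw.get("src", "unknown").lower(),
        "observed_at": datetime.fromtimestamp(
            raw["ts"], tz=timezone.utc
        ).isoformat(),
    }


# Example: run the transform over a small batch of raw records.
records = [{"id": 42, "src": "SensorA", "ts": 1700000000}]
standardized = [transform(r) for r in records]
```

In practice a script like this would sit inside the ETL pipeline's transform stage, with schema validation and error handling around it.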