AI/ML Engineer/Data Scientist / Public Trust

Peraton · Warrenton, VA
Onsite

About The Position

Peraton is seeking an experienced AI/ML Data Scientist to join our Federal Strategic Cyber program. This position requires fluency in AI/ML techniques; designing and building ETL pipelines; working with data management teams to convert raw data to standardized data schemas; implementing containerization strategies using OpenShift/Kubernetes; collaborating with development teams to integrate CI/CD pipelines; monitoring and optimizing system performance; automating deployment processes; enforcing security best practices; and troubleshooting issues across development, test, and production environments.

Requirements

  • Bachelor’s degree and a minimum of 8 years of related technical experience required. An additional 4 years of experience may be substituted for a degree.
  • Demonstrated fluency in AI/ML techniques.
  • Proficiency in scripting and automation (Bash, Python, etc.).
  • Strong understanding of containerization principles (Docker, Podman, etc.).
  • Ability to work independently and collaboratively in a team environment.
  • Experience interfacing with clients, technical support personnel, and other technical professionals to ensure efficient and effective service.
  • U.S. Citizenship is required.
  • Active Public Trust Clearance.
  • Willingness and ability to travel 10-25%.

Nice To Haves

  • Experience with the Databricks Data and AI platform.
  • Proven experience with OpenShift, Kubernetes, and other container-based platforms in production environments.
  • Experience implementing and utilizing AI models to explore and refine data.
  • Experience training task-specific AI models for normalization and alerting.
  • Experience with SAFe Agile testing environments.
  • Experience using OpenShift to run microservice architectures that scale data transformation processes.

Responsibilities

  • Design, build, and maintain Extract, Transform, and Load (ETL) pipelines using industry best practices.
  • Work with data management teams to design transform scripts in languages like Python to convert raw data to standardized data schemas.
  • Implement and manage containerization strategies using OpenShift/Kubernetes.
  • Collaborate with development teams to integrate CI/CD pipelines into the software development lifecycle.
  • Monitor and optimize system performance, reliability, and availability.
  • Automate infrastructure and application deployment processes.
  • Ensure security best practices are implemented throughout all DevOps procedures.
  • Troubleshoot and resolve issues in development, test, and production environments.
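
The transform-script responsibility above can be sketched in Python. This is an illustrative example only: the field names, alias map, and target schema are assumptions, not Peraton's actual data model.

```python
# Hypothetical sketch of a transform step: convert raw, inconsistently
# keyed records into a standardized schema. All field names and the
# schema itself are illustrative assumptions.

from datetime import datetime, timezone

# Target (standardized) schema: every output record exposes these keys.
STANDARD_FIELDS = ("source", "event_time", "severity", "message")

# Known raw key variants mapped to standardized keys.
KEY_ALIASES = {
    "src": "source",
    "origin": "source",
    "ts": "event_time",
    "timestamp": "event_time",
    "sev": "severity",
    "level": "severity",
    "msg": "message",
    "description": "message",
}

def normalize_record(raw: dict) -> dict:
    """Rename known aliases, coerce epoch timestamps to ISO 8601 UTC,
    and fill any missing standardized fields with None."""
    record = {}
    for key, value in raw.items():
        record[KEY_ALIASES.get(key, key)] = value

    # Coerce epoch-seconds timestamps to ISO 8601 strings.
    ts = record.get("event_time")
    if isinstance(ts, (int, float)):
        record["event_time"] = datetime.fromtimestamp(
            ts, tz=timezone.utc
        ).isoformat()

    # Guarantee the standardized shape, dropping unknown keys.
    return {field: record.get(field) for field in STANDARD_FIELDS}

if __name__ == "__main__":
    raw = {"src": "sensor-3", "ts": 1700000000, "level": "high", "msg": "blocked"}
    print(normalize_record(raw))
```

In a real pipeline a step like this would run as one stage of an ETL job, with the alias map and schema maintained alongside the data management team's schema definitions.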

Benefits

  • Overtime
  • Shift differential
  • Discretionary bonus

What This Job Offers

Job Type

Full-time

Career Level

Senior

Education Level

Associate degree

Number of Employees

5,001-10,000 employees
