DevSecOps Engineering Intern

Alpha Omega
Onsite

About The Position

The DevSecOps Engineering Intern will support the development and maintenance of secure, AI-driven application development pipelines within Alpha Omega's Continuum AF Secure track. This role focuses on integrating security-by-design practices and AI-powered automation into application stacks used for federal government solutions. The intern will collaborate with the internal engineering team on AIOps-driven security automation, AI-assisted vulnerability detection, intelligent compliance monitoring, and compliance-aligned development practices.

This internship provides hands-on experience with secure software development, AI-augmented DevSecOps pipelines, AI coding assistants, AI-driven threat modeling, and federal security frameworks such as NIST 800-53, FedRAMP, and the NIST AI Risk Management Framework (AI RMF). The role also helps strengthen Alpha Omega's ability to maintain a production-ready, AI-enhanced secure stack used for technical proposal demonstrations, client solutions, and rapid prototyping.

Requirements

  • Proficiency in at least one backend language or runtime, such as Python, Node.js, or Java
  • Familiarity with frontend frameworks (React preferred)
  • Foundational knowledge of secure coding practices and OWASP Top 10
  • Understanding of static and dynamic application security testing (SAST/DAST)
  • Experience using AI coding assistants (e.g., GitHub Copilot, Cursor, Amazon CodeWhisperer) for code generation, debugging, and refactoring
  • Basic understanding of prompt engineering for security-focused AI tools and LLM-based code analysis
  • Basic experience with cloud platforms (AWS or Azure preferred)
  • Familiarity with containerization and orchestration tools such as Docker or Kubernetes
  • Exposure to AI/ML or LLM-based tools used for automation/coding/operations
  • Familiarity with AI/ML concepts including supervised/unsupervised learning, NLP, and LLM architectures as they apply to security automation
  • Exposure to AIOps platforms or concepts such as AI-driven monitoring, intelligent alerting, predictive analytics for infrastructure, and automated remediation
  • Understanding of AI security risks including prompt injection, model evasion, data poisoning, and adversarial machine learning
  • Experience or coursework involving AI-powered coding/security tools
  • Familiarity with LLM-based code review and vulnerability analysis workflows
  • Familiarity with federal security frameworks such as NIST 800-53, FedRAMP, and FISMA
  • Academic or project-based experience is acceptable.

Nice To Haves

  • Currently pursuing or recently completed a Bachelor's or Master's degree in Computer Science, Software Engineering, Human-Centered Design, Information Technology, or a related technical field

Responsibilities

  • Support development and enhancement of AI-augmented DevSecOps pipelines within the Continuum AF Secure track
  • Assist with the integration of AI-assisted SAST/DAST tools (e.g., GitHub Copilot Autofix, Semgrep with LLM triage, Snyk, SonarQube) into internal development pipelines
  • Leverage AI coding assistants (e.g., GitHub Copilot, Amazon CodeWhisperer, Tabnine) to accelerate secure code generation while validating AI-generated code for security compliance
  • Support AIOps initiatives including AI-driven log analysis, anomaly detection, automated incident response, and intelligent alerting within CI/CD environments
  • Assist in developing and fine-tuning AI/ML models for automated vulnerability classification, threat prioritization, and security pattern recognition
  • Evaluate and implement AI-powered security scanning tools that use machine learning to reduce false positives and improve detection accuracy
  • Contribute to development of secure application stack components used for demonstrations, prototypes, and proposal support
  • Help document security control implementation patterns aligned with NIST 800-53, FedRAMP, and the NIST AI RMF for internal reuse across engagements
  • Support AI security governance by helping assess risks associated with AI/ML model deployment, including adversarial attack surfaces, data poisoning, and model integrity
  • Support automation efforts to reduce manual security review processes through AI-driven pipeline enhancements and intelligent workflow orchestration
  • Assist with AI-powered vulnerability detection, triage, and remediation efforts using LLM-based analysis
  • Collaborate with engineering teams to maintain secure coding practices, AI-safe development standards, and compliance-ready infrastructure patterns
  • Participate in technical discussions related to secure SDLC, compliance automation, cloud security, and responsible AI integration in federal environments