Systems Engineering Intern

Intrepid Fiber Networks · Broomfield, CO

About The Position

We at Intrepid Fiber believe everyone has the right to access the Internet, no matter where they live or their socioeconomic status. Our vision is to become the nation’s most prolific developer of fiber-to-the-home infrastructure. Intrepid is working with local municipalities to integrate the digital infrastructure necessary to afford consumers more choices, more accessibility, and better value by connecting them to the Internet services that enable them to live their best lives. For more information, please visit our website at https://www.intrepidfiber.com.

Summary

The Systems Engineering Intern will support the design and development of scalable data solutions focused on ingesting, processing, and analyzing large volumes of network and platform log data. This role provides hands-on experience working with real-world datasets, including OLT syslogs, audit logs, and ServiceNow API data, to improve operational visibility and system reliability. The intern will assist in building foundational data pipelines and exploring modern technologies, including AI/ML techniques, to enhance log analysis and automation. This position offers exposure to big data concepts, distributed systems, and emerging approaches to intelligent operations within a growing network environment.

Requirements

  • Foundational programming skills (Python, JavaScript, or similar)
  • Basic understanding of data structures, data processing, and analysis techniques
  • Interest in distributed systems, big data technologies, and cloud-based architectures
  • Exposure to or curiosity about AI/ML concepts and their practical applications
  • Strong analytical and problem-solving skills
  • Ability to learn quickly and adapt to new tools and technologies
  • Effective communication skills and ability to work within a collaborative team environment
  • Strong attention to detail and organizational skills
  • Self-motivated with the ability to manage tasks and priorities in a structured internship setting

Nice To Haves

  • Familiarity with APIs, log data, or system-generated data (preferred but not required)

Responsibilities

  • Assist in designing and implementing data ingestion pipelines for high-volume log sources such as network syslogs, audit logs, and API data
  • Support the development of processes for normalizing, structuring, and storing large datasets for long-term analysis
  • Analyze structured and unstructured data to identify trends, anomalies, and opportunities for improved operational insight
  • Collaborate with engineering teams to prototype and evaluate scalable data processing solutions
  • Contribute to the development of frameworks for centralized log aggregation and archival
  • Explore and test AI/ML techniques for log classification, filtering, anomaly detection, and noise reduction
  • Assist in integrating data from various systems, including network infrastructure and enterprise platforms like ServiceNow
  • Document data workflows, architectures, and best practices to support future engineering efforts
  • Support troubleshooting and analysis efforts by leveraging log data to identify root causes and system behaviors
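As a sketch of the anomaly-detection and noise-reduction work in the list above, the snippet below flags log templates whose per-window message count deviates sharply from the rest — the kind of naive statistical baseline one might prototype before trying real ML models. The template names, counts, and z-score threshold are hypothetical.

```python
from collections import Counter
from statistics import mean, stdev

# Illustrative only: flag log templates whose count in a time window is a
# statistical outlier (z-score above a threshold) relative to the other templates.
def flag_anomalous_templates(counts: dict[str, int], z_threshold: float = 2.0) -> list[str]:
    """Return templates whose per-window count deviates strongly from the mean."""
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [tpl for tpl, n in counts.items() if (n - mu) / sigma > z_threshold]

# Hypothetical message counts for one time window.
window = Counter({
    "link up": 12, "link down": 11, "dhcp ack": 10, "auth ok": 9,
    "port flap": 13, "ranging done": 10, "sync ok": 11, "keepalive": 12,
    "dying gasp": 300,  # sudden burst worth investigating
})
print(flag_anomalous_templates(window))  # → ['dying gasp']
```

A z-score over raw counts is crude (it assumes roughly comparable baseline rates across templates), but it illustrates the filtering-and-triage idea before reaching for heavier anomaly-detection techniques.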