Cloud Engineering Intern

BambooHR
Hybrid

About The Position

As a Cloud Engineering Intern on Data Infrastructure, you’ll help build and scale the cloud systems that power large-scale data workloads. You’ll work hands-on with cloud architecture and Infrastructure as Code (IaC) to support secure, high-availability environments for data lakes, distributed databases, and streaming platforms. Partnering with experienced engineers, you’ll automate and optimize core data services such as Kafka, AWS RDS, and Databricks, while learning how modern teams design infrastructure that is secure, reliable, and production-ready. This role is ideal for a systems-minded student excited about building secure-by-default infrastructure and keeping mission-critical data running at peak performance.

Requirements

  • Currently a Junior pursuing a bachelor’s or master’s degree in a technical field such as Computer Science, Engineering, Information Systems, or a related discipline
  • Understanding of cloud computing concepts and hands-on experience with Amazon Web Services (AWS)
  • Experience using Infrastructure as Code (IaC) tools such as Terraform
  • Familiarity with deploying or supporting cloud-based data platforms (e.g., Apache Kafka, Databricks, AWS RDS/MySQL)
  • Experience building or working with automation for scaling, backups, or system health checks
  • Working knowledge of monitoring and observability tools such as Grafana, Prometheus, or CloudWatch
  • Understanding of networking and security fundamentals, including IAM, private networking, and encryption
  • Exposure to SQL and working with structured data
  • Exposure to or interest in using AI tools to enhance engineering workflows

Responsibilities

  • Develop and maintain Infrastructure as Code (IaC) templates and modules (e.g., Terraform) to automate the deployment of cloud-based data environments.
  • Configure and maintain core data infrastructure components, including data lakes, distributed databases (e.g., Databricks, MySQL), messaging systems (e.g., Apache Kafka), and caching layers (e.g., Memcached).
  • Implement automation for scaling, reclaiming, and expanding storage to meet growing data needs.
  • Create and maintain scripts that track the health, performance, and uptime of data environments.
  • Develop dashboards and alerting systems (using tools like Grafana or Prometheus) to provide visibility into data platform stability.
  • Evaluate new data technologies.

Benefits

  • A great company culture recognized by multiple organizations, including Inc. and the Salt Lake Tribune
  • Paid time off for summer holidays and birthdays
  • 401k plans with up to 6% company match
  • Join us for company events during the summer
  • We pay for a one-year subscription to Financial Peace University, and you walk away with financial savvy and a bonus