Data Science Intern

Global Partners
Waltham, MA
$16 - $20

About The Position

We're looking for a Data Science Summer Intern who's excited to jump into some of the most interesting areas of AI today. You'll spend time experimenting with prompt engineering, figuring out how to evaluate large language models, and helping design multi-agent systems that work together to solve real problems. You'll also work alongside data scientists and ML engineers on deep learning projects such as time series forecasting, building out our feature store, and running model experiments. The goal is to give you hands-on experience that feels both practical and forward-looking, so you leave with a stronger skill set and a clear sense of how AI gets applied in the real world.

At Global Partners, business starts with people. Since 1933, we've believed in taking care of our customers, our guests, our communities, and each other, and that belief continues to guide us. The Global Spirit is how we work to fuel that long-term commitment to success. As a Fortune 500 company with 90+ years of experience, we're proud to fuel communities responsibly and sustainably. We show up every day with grit, passion, and purpose: anticipating needs, building lasting relationships, and creating shared value.

Requirements

  • Currently pursuing a Bachelor's or Master's in Computer Science, Data Science, Machine Learning, or related field; prior co-op/internship/full-time experience in DS/ML or software engineering is a plus.
  • Strong programming skills in Python, with experience in ML libraries such as PyTorch, TensorFlow, and scikit-learn, and proficiency in data preprocessing using Pandas, NumPy, and (optionally) Spark.
  • Solid understanding of machine learning concepts, including model training, evaluation, backtesting, and feature engineering.
  • Familiarity with software engineering and MLOps practices, including Git for version control, testing frameworks, experiment tracking (MLflow, Weights & Biases), containerization with Docker, and reproducibility standards.
  • Comfort working in cloud environments (AWS, GCP, or Azure) and with database systems (SQL/NoSQL), including contributing to reusable components like a feature store.
  • Strong communication skills with the ability to clearly document workflows, tools, and findings for team adoption.

Nice To Haves

  • Hands-on exposure to GenAI and agent frameworks (LangChain, LangGraph, CrewAI), including platforms and tooling such as Amazon Bedrock Agents, MCP servers, A2A patterns/frameworks, and evaluation tools like Braintrust.

Responsibilities

  • Design and run prompt engineering experiments, exploring techniques, templates, and evaluation methods to improve LLM outputs, and extend this work into testing multi-agent workflows for tasks like reasoning, summarization, and decision support.
  • Collaborate with data scientists and ML engineers on deep learning projects for time series forecasting, contributing to feature engineering, model training, hyperparameter tuning, and backtesting.
  • Develop and maintain data pipelines and feature store components, ensuring features and datasets are clean, standardized, reusable, and well-documented.
  • Prototype and evaluate models using traditional ML and deep learning approaches, compare against baselines, and apply MLOps practices like experiment tracking, reproducibility, and containerization to prepare successful prototypes for production.
  • Work in cloud environments (AWS, GCP, or Azure) to train and scale models, and clearly document workflows, experiments, and results for team adoption and future use.

Benefits

  • Coins! We offer competitive salaries and opportunities for growth.
  • We have an amazing Talent Development Team that creates trainings for growth and job development.
  • Health & Wellness - Medical, Dental, Vision, and Life Insurance, along with additional wellness support.
  • The Road Ahead - We offer a 401(k) with a company match!
  • Professional Development - We provide tuition reimbursement; this benefit is offered after 6 months of service.