Intern - AI Engineering LLM

Veolia Environnement SA, Paramus, NJ
$21 - $25

About The Position

Student Exploration and Experience Development (SEED) is a 12-week internship at Veolia for students to gain hands-on experience in sustainability and ecological transformation. Interns work on real-world projects, receive mentorship from industry professionals, and participate in workshops and networking events. The program aims to nurture talent, promote innovation, and foster meaningful connections between students and industry professionals, giving participants the skills, knowledge, and connections needed to make a positive impact in the industry.

Program Dates: June 1, 2026 to August 21, 2026.

Position Purpose: We are seeking a motivated AI Engineering intern to support the development and implementation of an AI-powered deep research agent. This role offers hands-on experience with cutting-edge large language models, cloud infrastructure, and enterprise software development.

Requirements

  • Working towards a PhD degree in AI/ML/Computer Science.
  • Cumulative GPA of 3.8 required.
  • Strong communication skills, including written, verbal, listening, presentation and facilitation skills.
  • Demonstrated ability to build collaborative relationships.

Responsibilities

  • Understanding and working with commercial/proprietary LLMs such as Gemini (Google), GPT (OpenAI), and Claude Sonnet (Anthropic) for high-performance, large-context, and multimodal tasks.
  • Familiarity with open-source/self-hosted LLMs such as Llama (Meta) and Mixtral (Mistral AI).
  • Requirements Gathering: Using Confluence for documentation and collaboration.
  • Architecture Design: Creating system diagrams and workflows with Lucidchart.
  • Prototyping: Designing UI/UX prototypes in Figma.
  • Project Management: Tracking tasks and progress in Jira.
  • Data Preparation & Management: Cleaning, transforming, and organizing data for use in AI/ML workflows.
  • Core LLM Frameworks: Using LangChain or LlamaIndex for orchestrating LLM applications.
  • Agent Frameworks: Building multi-agent systems with Semantic Kernel, CrewAI, and LangGraph.
  • Prompt Management: Managing and optimizing prompts with LangSmith.
  • Semantic Search: Implementing semantic search and retrieval using Vertex AI vector databases.
  • API Framework: Developing RESTful APIs with FastAPI (Python); a minimal sketch of this pattern appears after this list.
  • Message Queue: Integrating asynchronous communication with Apache Kafka and Redis Streams.
  • Web Framework: Building user interfaces with React or Angular.
  • UI Components: Utilizing Material-UI for consistent, modern UI elements.
  • IDE: Using Google AI Studio for AI application development.
  • IDE: Writing and debugging code in VS Code.
  • AI Assistants: Leveraging GitHub Copilot and Cursor for code suggestions and productivity.
  • Version Control: Managing code with GitHub or GitLab.
  • Code Quality: Ensuring code quality and standards with SonarQube, ESLint, and Pylint.
  • Fine-tuning Platforms: Using Vertex AI Tuning for model customization.
  • Training Frameworks: Training and experimenting with models in PyTorch, TensorFlow, or JAX.
  • Efficient Training: Applying parameter-efficient fine-tuning (PEFT) methods like LoRA and QLoRA (a brief sketch appears after this list).
  • Synthetic Data: Generating synthetic data for model training and evaluation.
  • Evaluation: Assessing models with HELM, lm-evaluation-harness, and custom benchmarks.
  • LLM-Specific Testing: Using RAGAS and DeepEval for LLM evaluation, LangSmith Evaluators for prompt testing, and hallucination detection.
  • Containerization: Packaging applications with Docker.
  • Orchestration: Managing containers at scale with Kubernetes and Google GKE.
  • Cloud Platform: Using Google Cloud Platform (GCP) services such as Vertex AI for ML, GKE for Kubernetes, Cloud Run for serverless deployment, and Cloud Functions for event-driven tasks.
  • LLM Observability: Monitoring LLM performance and usage with LangSmith and Weights & Biases.
  • Cost Tracking: Monitoring and optimizing costs with OpenMeter and custom dashboards.
  • Quality Monitoring: Setting up continuous evaluation pipelines to ensure model quality and reliability.
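To give candidates a concrete sense of the retrieval-augmented pattern described in the bullets above, here is a minimal sketch of a research endpoint built with FastAPI. The retrieve_passages and generate_answer helpers are hypothetical placeholders standing in for a vector store (e.g. Vertex AI vector search) and an LLM client; they are not part of Veolia's actual system.

```python
# Minimal sketch of a retrieval-augmented "deep research" endpoint.
# The retrieval and LLM helpers are placeholders, not a real implementation.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="deep-research-agent (sketch)")

class ResearchRequest(BaseModel):
    question: str
    top_k: int = 5  # number of passages to retrieve

class ResearchResponse(BaseModel):
    answer: str
    sources: list[str]

def retrieve_passages(query: str, top_k: int) -> list[str]:
    """Placeholder for semantic search against a vector database
    (e.g. Vertex AI Vector Search). Returns the top-k matching passages."""
    raise NotImplementedError("wire up your vector store here")

def generate_answer(question: str, passages: list[str]) -> str:
    """Placeholder for an LLM call (Gemini, GPT, or Claude) that answers
    the question grounded in the retrieved passages."""
    raise NotImplementedError("wire up your LLM client here")

@app.post("/research", response_model=ResearchResponse)
def research(req: ResearchRequest) -> ResearchResponse:
    passages = retrieve_passages(req.question, req.top_k)
    answer = generate_answer(req.question, passages)
    return ResearchResponse(answer=answer, sources=passages)
```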
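Similarly, the parameter-efficient fine-tuning bullet can be illustrated with a short LoRA sketch using the Hugging Face peft and transformers libraries. The checkpoint name and hyperparameters below are illustrative assumptions, not a prescribed configuration.

```python
# Illustrative LoRA setup: only small adapter matrices are trained,
# while the base model's weights stay frozen.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Example checkpoint; any causal LM supported by transformers works.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

lora_config = LoraConfig(
    r=16,                                  # rank of the adapter matrices
    lora_alpha=32,                         # scaling factor for the adapters
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # confirms only adapter weights are trainable
```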