Data Engineering Intern

Serv Recruitment Agency
Hybrid

About The Position

Serv, a global executive recruitment partner, is hiring on behalf of our client Mercator.ai for a Data Engineering Intern.

Mercator.ai is transforming how business development teams identify and win opportunities in the construction industry. Their platform surfaces early signals on where construction is happening and who is involved, helping clients act faster and smarter. Behind the scenes, Mercator.ai processes and enriches millions of complex records to deliver powerful, actionable intelligence. They are hiring a Data Engineering Intern to support the growth and scalability of their data platform.

As the Data Engineering Intern, you will work closely with the Data Quality and Engineering teams to help build, improve, and maintain the data pipelines that power the Mercator.ai platform. You will gain hands-on experience with real production systems, messy real-world datasets, and scalable data workflows. This is an ideal opportunity for a curious and motivated student who wants practical experience in data engineering, automation, and startup execution. You will contribute meaningful work while learning from experienced engineers in a collaborative environment.

Requirements

  • Currently enrolled in a university co-op program in Computer Science, Software Engineering, or a related field
  • At least one prior internship or co-op term in a technical role
  • Familiarity with ETL concepts and tools
  • Experience writing basic Python scripts and SQL queries
  • Strong attention to detail and willingness to learn in a collaborative environment
  • Strong communication skills and ability to ask questions, receive feedback, and work with mentors

Nice To Haves

  • Exposure to data engineering concepts such as data modeling, orchestration, or transformation tools
  • Familiarity with cloud platforms such as Google Cloud Platform or AWS
  • Experience with Pandas or other Python data libraries
  • Exposure to web scraping tools such as Playwright, Beautiful Soup, or Requests
  • Familiarity with validation frameworks such as Pydantic
  • Experience with LLM prompt workflows using models such as Gemini or ChatGPT
  • Comfort using Git and version control workflows
  • Interest in construction data, geospatial data, or startup environments

Responsibilities

  • Collaborate with the data engineering team to build and improve pipelines that extract, clean, enrich, and deliver data from multiple sources
  • Support the development of data ingestion and ETL pipelines for new datasets
  • Assist in maintaining and monitoring workflows to ensure data accuracy, timeliness, and reliability
  • Help troubleshoot data quality, transformation, or integration issues across pipelines
  • Contribute to automation initiatives that improve scale and operational efficiency
  • Learn and apply best practices in testing, logging, and error handling
  • Assist with documentation for tools, systems, and engineering standards
  • Work with mentors and team members to continuously improve technical skills and execution

Benefits

  • Competitive internship compensation based on program structure and experience