Data Engineer, Integrations

Global Partners LP, Waltham, MA

About The Position

The Data Engineer, Data Platforms is a key contributor on our Data Team, with a focus on building and maintaining the robust data pipelines that underpin our data and analytics initiatives. You're not just managing data; you're supporting the platforms that enable data-centric innovation across the organization. You will work with modern tools such as AWS, Snowflake, Dagster, and dbt, and use technologies like ECS, Docker, and CloudFormation to help maintain and evolve our data stack. Your hands-on experience developing and monitoring data pipelines, combined with a passion for automation and continuous improvement, will help keep Global Partners at the forefront of data excellence. As you collaborate with cross-functional teams, provide strategic guidance, and champion the consistent adoption of best practices, you will be instrumental in shaping both our data environment and the future of our analytics. If the prospect of work where every decision melds strategy with data excites you, join us.

Global Partners offers a collaborative team and an environment where we actively invest in a culture of data-driven excellence. At Global Partners, business starts with people. Since 1933, we've believed in taking care of our customers, our guests, our communities, and each other, and that belief continues to guide us. The Global Spirit is how we work to fuel that long-term commitment to success. As a Fortune 500 company with 90+ years of experience, we're proud to fuel communities responsibly and sustainably. We show up every day with grit, passion, and purpose: anticipating needs, building lasting relationships, and creating shared value.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field—or equivalent hands-on experience.
  • Approximately 3-5 years of relevant experience in Data Engineering, Software Engineering, or a related technical discipline.
  • Proficiency in Python for data-intensive applications and automation, strong SQL skills, and experience working with cloud data warehouses (e.g., Snowflake, BigQuery); a brief sketch of this kind of work follows this list.
  • Familiarity with designing and implementing scalable, cloud-native (containerized) data platforms, including exposure to Infrastructure as Code (e.g., Terraform, CloudFormation) and containerization and orchestration (Docker, ECS).
  • Working knowledge of CI/CD pipelines using tools such as GitHub Actions.
  • Exposure to modern data orchestration tools (e.g., Dagster) and transformation frameworks (e.g., dbt).
  • Basic understanding of data security, governance, and metadata management best practices in cloud environments.
  • Strong communication skills and a collaborative mindset, with a willingness to learn, share knowledge, and work within cross-functional teams.
  • Familiarity with Agile development methodologies and a metrics-first approach to problem solving.
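
As a concrete taste of the Python and warehouse skills above, here is a minimal sketch of querying Snowflake from Python with the snowflake-connector-python package. The environment variables and the warehouse name are hypothetical placeholders, not Global Partners' actual configuration:

    import os
    import snowflake.connector

    # Credentials come from the environment; every value here is a placeholder.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",  # hypothetical warehouse name
    )
    try:
        cur = conn.cursor()
        cur.execute("SELECT CURRENT_VERSION()")  # trivial smoke-test query
        print(cur.fetchone()[0])
    finally:
        conn.close()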

Responsibilities

  • Develop and maintain scalable, cloud-native data pipelines that support all data engineering initiatives across the organization, using technologies such as AWS, Python, Docker, and CloudFormation.
  • Assist in automating deployment (CI/CD) pipelines for data infrastructure and applications, leveraging tools like GitHub Actions to ensure reliable and timely deployments.
  • Implement Infrastructure as Code (IaC) practices using tools such as CDK or CloudFormation to help manage and version control cloud resources (see the first sketch after this list).
  • Build and monitor data orchestration workflows with modern tools like Dagster, ensuring efficient data processing and transformation (see the second sketch after this list).
  • Contribute to developing automated solutions, tech tools, infrastructure and self-service capabilities that simplify onboarding and configuration for developers working in our data environment.
  • Support the optimization of data storage and processing systems—including data lakes and data warehouses (e.g., Snowflake, SQL Server, S3, RAG Models)—to ensure both performance and cost-effectiveness.
  • Implement and maintain observability and monitoring solutions for data pipelines and infrastructure using tools like CloudTrail, LogicMonitor, Metaplane, or DataDog to uphold system reliability and performance.
  • Collaborate with peers in data engineering, data science, analytics engineering, and operations to share best practices and contribute to the broader adoption of DataOps methodologies.
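
To make the IaC responsibility concrete, here is a minimal sketch of a stack written with the AWS CDK (v2) in Python. The stack and bucket names are hypothetical, and a real deployment would add permissions, tagging, and lifecycle rules:

    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class DataLakeStack(cdk.Stack):
        # Hypothetical stack that provisions a raw landing bucket for pipeline data.
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # Versioned, server-side-encrypted bucket for raw ingested files.
            s3.Bucket(
                self,
                "RawLandingBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
            )

    app = cdk.App()
    DataLakeStack(app, "DataLakeStack")
    app.synth()  # emits a CloudFormation template under cdk.out/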
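
And here is a minimal sketch of a Dagster asset pipeline of the kind described above; the asset names and the inline sample data are illustrative only:

    import pandas as pd
    from dagster import Definitions, asset, materialize

    @asset
    def raw_sales() -> pd.DataFrame:
        # Placeholder extract step; in practice this would pull from S3 or an API.
        return pd.DataFrame({"store_id": [1, 2], "amount": [100.0, 250.0]})

    @asset
    def sales_summary(raw_sales: pd.DataFrame) -> pd.DataFrame:
        # Downstream transform; Dagster wires the dependency by parameter name.
        return raw_sales.groupby("store_id", as_index=False)["amount"].sum()

    # Dagster deployments discover assets through this Definitions object.
    defs = Definitions(assets=[raw_sales, sales_summary])

    if __name__ == "__main__":
        # Materialize both assets locally as a quick smoke test.
        result = materialize([raw_sales, sales_summary])
        assert result.success

In production, such assets would typically be scheduled and pointed at Snowflake through an I/O manager rather than returning in-memory DataFrames.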

Benefits

  • We offer competitive salaries and opportunities for growth.
  • We have an amazing Talent Development Team that creates training programs for growth and career development.
  • Medical, Dental, Vision, and Life Insurance, along with additional wellness support.
  • We offer a 401(k) plan with a company match!
  • We provide tuition reimbursement; this benefit is offered after 6 months of service.