About The Position

Nutrien is a leading provider of crop inputs and services, and our business results make a positive impact on the world. Our purpose, Feeding the Future, is the reason we come to work each day. We’re guided by our culture of care and our core values: safety, inclusion, integrity, and results. When we say we care, we mean it. We’re creating an inclusive workplace where everyone feels safe, has a sense of belonging, trusts one another, and acts with integrity. Through the collective expertise of our nearly 26,000 employees, we operate a world-class network of production, distribution, and ag retail facilities. We efficiently serve growers' needs and strive to provide a more profitable, sustainable, and secure future for all stakeholders. Help us raise the expectation of what an agriculture company can be and grow your career with Nutrien.

Requirements

  • Bachelor's degree required, preferably in Information Technology, Computer Science, or a related discipline; experience may be considered in lieu of education.
  • At least 3 years of hands-on experience with DevOps/DataOps practices.
  • Solid experience with AWS cloud services (IAM, RDS, DynamoDB, Redshift, Lake Formation, S3, Athena, Glue, Lambda, EventBridge, Secrets Manager, KMS, CloudWatch, CloudTrail, ECS, etc.).
  • Proficient with GitHub (including GitHub Actions) and Docker.
  • Proficient with infrastructure-as-code tools (Terraform, CloudFormation, CDK); a brief sketch follows this list.
  • Competent in coding/programming, with working knowledge of two or more languages, including at least one of Python, SQL, HCL, or shell scripting.
  • Working knowledge of observability tools (Prometheus, Grafana, etc.).
  • Experience working in a collaborative agile environment with a focus on high code quality and continuous integration.
  • Strong troubleshooting and problem-solving experience.
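
  As a rough illustration of the infrastructure-as-code proficiency referenced above, here is a minimal sketch using the AWS CDK in Python; the stack, bucket, and database names are hypothetical and not taken from the posting.

  # Minimal AWS CDK (Python) sketch: an encrypted, versioned S3 raw-zone bucket
  # and a Glue catalog database -- the kind of building blocks a lakehouse
  # platform automates. All names are illustrative only.
  from aws_cdk import App, RemovalPolicy, Stack
  from aws_cdk import aws_glue as glue
  from aws_cdk import aws_s3 as s3
  from constructs import Construct

  class DataLakeStack(Stack):
      def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
          super().__init__(scope, construct_id, **kwargs)

          # Raw-zone bucket with server-side encryption and versioning enabled.
          s3.Bucket(
              self, "RawZoneBucket",
              encryption=s3.BucketEncryption.S3_MANAGED,
              versioned=True,
              removal_policy=RemovalPolicy.RETAIN,
          )

          # Glue catalog database registering the raw zone for Athena / Lake Formation.
          glue.CfnDatabase(
              self, "RawZoneDatabase",
              catalog_id=self.account,
              database_input=glue.CfnDatabase.DatabaseInputProperty(name="raw_zone"),
          )

  app = App()
  DataLakeStack(app, "DataLakeStack")
  app.synth()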

Nice To Haves

  • Familiarity with Jira, Confluence, Liquibase, Step Functions, Amazon SageMaker, Amazon Q/Kiro, and analytics and AI tooling.

Responsibilities

  • Collaborate with the rest of the Data Platform team to support the data platform and data governance for a dozen data product teams working in a data mesh architecture on AWS using a lakehouse approach.
  • Participate in delivery planning and in projects that improve code quality and process efficiency.
  • Develop infrastructure-as-code to automate the deployment of AWS services.
  • Configure, maintain and document automated build pipelines to support CI/CD of infrastructure.
  • Implement and document data platform tools to provide coding standardization and automate processes for the organization.
  • Communicate and evangelize the Data Platform standards and best practices within the Digital organization.
  • Implement observability and SRE practices in AWS to ensure the reliability and trustworthiness of data pipelines and analytics; a brief sketch follows this list.
  • Support the transformation of established infrastructure through automation.
  • Support building new system environments, and upgrading/patching existing ones, using automation tooling.
  • Display passion and leadership around data, change, and improvement.
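
  As a hedged sketch of the observability practices mentioned above, the snippet below publishes custom CloudWatch metrics for a data pipeline run using boto3; the namespace, metric names, and pipeline name are assumptions for illustration, not details from the posting.

  # Publish custom CloudWatch metrics describing a pipeline run so that alarms
  # and dashboards (CloudWatch or Grafana) can track reliability over time.
  import boto3

  cloudwatch = boto3.client("cloudwatch")

  def record_pipeline_run(pipeline_name: str, rows_processed: int, succeeded: bool) -> None:
      """Emit volume and failure metrics under a custom namespace (illustrative names)."""
      cloudwatch.put_metric_data(
          Namespace="DataPlatform/Pipelines",  # hypothetical namespace
          MetricData=[
              {
                  "MetricName": "RowsProcessed",
                  "Dimensions": [{"Name": "Pipeline", "Value": pipeline_name}],
                  "Value": float(rows_processed),
                  "Unit": "Count",
              },
              {
                  "MetricName": "RunFailed",
                  "Dimensions": [{"Name": "Pipeline", "Value": pipeline_name}],
                  "Value": 0.0 if succeeded else 1.0,
                  "Unit": "Count",
              },
          ],
      )

  if __name__ == "__main__":
      record_pipeline_run("orders_ingest", rows_processed=12345, succeeded=True)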

Benefits

  • Comprehensive medical, dental, and vision coverage, life insurance, and disability coverage for positions working more than 30 hours per week
  • Retirement program that encourages employees to save for the longer term, with generous matching employer contributions
  • Paid vacation, sick days, and holidays, as well as paid personal and maternity/parental leaves and an Employee and Family Assistance Program
  • Annual incentive plan, consistent with the terms of our program(s), under which the discretionary payout of awards reflects components such as the performance of the company and the employee