BI Data Engineering Manager

McLane, Temple, TX (Hybrid)

About The Position

Moving America forward — together. We’ve been forging our path as a leader in the distribution industry since 1894. Building an expansive nationwide network of team members for 130+ years has allowed us to stay agile for our clients across the restaurant, retail, and e-commerce industries. We look to the future and are ready to continue making industry-defining moves by embracing the newest technology, continuing team member training, and emphasizing our people-centered culture.

The BI Data Engineering Manager is a hybrid remote position that requires the candidate to report to and work from the office four days a week. Interested candidates should therefore be within a 50-minute radius of Temple, TX.

Position Overview: Plan, organize, and manage the activities of the Data Engineering team to provide an efficient and cost-effective data infrastructure for McLane Company and to maximize productivity through technology. We are looking for a Cloud Data Engineering Manager with hands-on experience in cloud technologies such as Azure, Azure Data Factory, Databricks, Microsoft Fabric, PySpark, Spark, SQL, and notebooks, along with Airflow orchestration and DevOps CI/CD experience.

Requirements

  • Bachelor’s degree in computer science, statistics, engineering, or a related field.
  • Three or more years of managerial experience, including budget and purchase execution responsibility.
  • Five or more years of broad-based experience in data analysis, conceptualization, modeling, reporting, and data governance, working with structured and unstructured data.
  • Five or more years of experience designing and maintaining data warehouses and/or data lakes using big data technologies such as Spark/Databricks or distributed databases such as Redshift and Snowflake, plus experience housing, accessing, and transforming data in a variety of relational databases.
  • Experience building data pipelines and deploying/maintaining them following modern data engineering best practices (e.g., DBT, Airflow, Spark, the Python OSS data ecosystem).
  • Knowledge of software engineering fundamentals and software development tooling (e.g., Git, CI/CD, JIRA), and familiarity with the Linux operating system and the Bash and Z shells.
  • Experience with cloud database technologies (e.g., Azure) and developing solutions on cloud computing services and infrastructure in the data and analytics space.
  • Strong written and verbal communication skills.
  • Understanding of machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics as necessary to carry out the role effectively.
  • Ability to work outside of normal business hours to support resolution of issues.
  • Ability to manage multiple projects simultaneously.

Responsibilities

  • Lead the evaluation, implementation, and deployment of emerging tools and processes for analytics data engineering to improve productivity and quality.
  • Maintain an infrastructure that provides flexibility and responsiveness while supporting the productivity needs of the business, using secure, stable, and comprehensive industry-standard technology, methods, and tools.
  • Design, develop, optimize, and maintain data architecture and pipelines that adhere to ELT principles and business goals.
  • Solve complex data problems to deliver insights that help the business achieve its goals.
  • Create data products for engineering, analyst, and data science team members to accelerate their productivity.
  • Engineer effective features for modeling in close collaboration with data scientists and business units.
  • Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
  • Provide up-to-date solutions that enable the business units to gain productivity and become more competitive, in both on-premises and cloud-based environments.
  • Implement, maintain, and validate Disaster Recovery and Business Continuity procedures for storage and open system environments.
  • Provide Root Cause Analysis, including necessary remediation steps, for system issues resulting in unplanned business impact greater than 4 hours.
  • Perform other duties as assigned.

Benefits

  • Day 1 benefits: medical, dental, and vision insurance, FSA/HSA, and company-paid life insurance
  • Paid holidays, plus vacation time and sick leave accrual from day one
  • 401(k) Profit Sharing Plan after 90 days
  • Pet insurance
  • Maternity/paternity leave
  • Employee assistance programs
  • Discount programs
  • Tuition reimbursement program