About The Position

Our Data & Analytics group works with business owners and stakeholders across Sales, Marketing, People, GCS, Infosec, Operations, and Finance to solve complex business problems that directly impact the metrics used to measure Palo Alto Networks' progress. We leverage the latest technologies from the cloud and big data ecosystem to improve business outcomes and create value through prototyping, proof-of-concept projects, and application development. We are looking for a Principal IT Data Engineer with extensive experience in data engineering, SQL, cloud engineering, and business intelligence (BI) tools. The ideal candidate will be responsible for architecting, designing, implementing, and maintaining scalable data transformations and analytical solutions that support our business objectives. This role requires a strong understanding of data engineering principles, as well as the ability to collaborate with cross-functional teams to deliver high-quality data solutions.

Requirements

  • Minimum of 12 years of related experience with a Bachelor’s degree in Computer Science, Engineering, or a related field; or 8 years with a Master’s degree; or 5 years with a PhD.
  • 5+ years of experience in data engineering, with a focus on building and maintaining data pipelines and analytical solutions.
  • Expertise in SQL programming and database management systems.
  • Hands-on experience with ETL tools and technologies such as Apache Spark and Apache Airflow.
  • Experience with cloud platforms (preferably GCP) and services like Dataflow, DataProc, BigQuery, and Cloud Composer.
  • Experience with AI tools and a solid understanding of how to apply AI across the development lifecycle is mandatory.
  • Demonstrated readiness to leverage GenAI tools to enhance efficiency across the data engineering lifecycle (for example, generating complex SQL queries, scaffolding initial Python/Spark scripts, or auto-generating pipeline documentation) is a nice-to-have.

Nice To Haves

  • Experience with Big Data tools like Kafka.
  • Proficiency in object-oriented or object-functional scripting languages such as Python or Scala.
  • Experience with BI visualization platforms like Tableau.
  • Familiarity with SFDC Data Objects (e.g., Opportunity, Quote, Accounts).
  • Aptitude for leveraging GenAI tools to enhance efficiency in the data engineering lifecycle.

Responsibilities

  • Design, develop, and maintain data pipelines to extract, transform, and load (ETL) data from various sources into our data warehouse or data lake environment.
  • Proactively identify and implement GenAI-driven solutions that measurably improve the reliability and performance of data pipelines, or that optimize key processes such as data quality validation and root cause analysis for data issues (nice-to-have).
  • Collaborate with stakeholders to gather requirements and translate business needs into technical solutions.
  • Optimize and tune existing data pipelines and transformations for performance, reliability, and scalability.
  • Implement data quality and governance processes to ensure data accuracy, consistency, and compliance with regulatory standards.
  • Work closely with the BI team to design and develop dashboards, reports, and analytical tools that provide actionable insights to stakeholders.
  • Mentor junior members of the team and provide guidance on best practices for data engineering and BI development.
  • Drive new and improved processes to increase engineer productivity.
  • Act as a trusted technical advisor and thought leader, identifying and integrating emerging technologies (e.g., AI/ML, agentic frameworks, AI-first architectures, cloud-native platforms) and guiding teams to adopt innovative best practices that keep systems adaptable and future-ready (nice-to-have).