Data Engineer

Red Bull - Santa Monica, CA
$112,000 - $168,000

About The Position

Red Bull North America is building the #1 CPG Data and Analytics team in the United States. We are looking for a Data Engineer to join our Enterprise Analytics & Data Engineering team - a small, high-ownership group serving the Sales, Distribution, Operations, and Finance functions across the business. The Data Engineer will play a pivotal role in transforming raw data into reliable, analytics-ready products that people actually use to make decisions - building and maintaining the pipelines, Snowflake data models, and dbt-based transformation layers that serve as the backbone of our analytics and AI ecosystem. The ideal candidate will have hands-on experience with Snowflake, dbt, Dagster, and Python to develop, implement, and maintain robust data pipelines and analytical solutions. The engineer will interact directly with business stakeholders, translating business requirements into technical solutions. Service mindedness, a white-glove-service approach, strong communication skills, and proactivity are key attributes for the right candidate.

What Success Looks Like

  • Pipelines run reliably with high data quality and minimal rework
  • Transformation models are clean, tested, and documented to team standards
  • AI-ready data layers are in place and accelerating intelligent analytics delivery on Snowflake Cortex
  • Business teams receive accurate, well-documented data products without needing to re-open requirements

Requirements

  • 3+ years of experience in data engineering or analytics engineering
  • Bachelor's degree or higher in Computer Science, Information Systems, Data Engineering, or a related field
  • Hands-on experience with a modern cloud data warehouse platform (e.g., Snowflake, Databricks, or equivalent): SQL, data modeling, and performance tuning
  • Working proficiency with a SQL-based data transformation framework (e.g., dbt or equivalent)
  • Experience with a workflow orchestration tool (e.g., Dagster, Airflow, Prefect, or equivalent)
  • Python proficiency: data manipulation, scripting, pipeline development, and Git-based version control
  • Experience with GitHub and CI/CD pipeline tooling for data asset deployment
  • Familiarity with cloud storage and compute services (e.g., AWS, Azure, or GCP)
  • Strong communication skills; proactive, collaborative, and service-minded
  • Demonstrated ability to work effectively with business users at all levels
  • High level of responsibility and accountability, with a commitment to delivering high-quality solutions
  • Willingness to travel as needed for onboarding and collaboration with global teams
  • Fluent in English; additional language skills an advantage

Nice To Haves

  • Experience with agentic AI frameworks like Snowflake Cortex or equivalent is a strong plus

Responsibilities

  • Design, build, and maintain data pipelines using modern orchestration tools (e.g., Dagster, Airflow, or equivalent)
  • Develop and optimize Snowflake data models — including dynamic tables, streams, tasks, and materialized views — for performance and reliability
  • Ingest and process structured and semi-structured data (CSV, JSON, Parquet) via automated ELT workflows
  • Write Python for data manipulation, automation, and pipeline development — following engineering best practices including testing, documentation, and code optimization
  • Manage version control and collaboration through GitHub, adhering to branching strategies and code review standards
  • Build and maintain CI/CD pipelines to automate testing, validation, and deployment of data assets
  • Contribute to data lake design and maintenance, ensuring data integrity, lineage, and quality standards
  • Build clean, AI-ready data layers that support agentic analytics and intelligent querying use cases on Snowflake Cortex
  • Contribute to semantic layer development alongside senior engineers, supporting clean, consistent data access patterns for AI and analytics consumers
  • Support the team's work in AI for analytics on Snowflake Cortex — executing on agent-driven workflows and automated insight pipelines under the guidance of senior engineers
  • Monitor and troubleshoot pipelines to ensure uptime, data quality, and SLA compliance
  • Implement testing frameworks within your transformation layer to validate accuracy and catch issues early
  • Identify opportunities to optimize pipeline performance, reduce latency, and lower compute cost
  • Partner with business analysts and Sales, Distribution, Operations, and Finance teams to translate requirements into technical solutions
  • Engage business stakeholders with a service-first mindset — proactively communicating, setting clear expectations, and following through
  • Document pipeline designs, data flows, and technical decisions to support team knowledge and auditability
  • Build relationships with global data engineering teams to align on standards and shared solutions
  • Contribute to the Analytics roadmap for short, medium, and long-term business needs
  • Innovate and enhance our data lakes and data fabric, ensuring alignment with business goals
  • Stay current with industry trends and emerging technologies, particularly in the Snowflake ecosystem and AI-driven analytics
  • Own your work end-to-end — manage priorities, track commitments in Jira, and don't wait to be asked
  • Collaborate openly across engineering, analytics, and business teams in a high-trust, low-bureaucracy environment
  • Bring a white-glove mindset to business stakeholders — responsive, clear, and solutions-oriented

Benefits

  • Comprehensive Medical, Dental and Vision Plans
  • 401k Match
  • Family Leave
  • PTO & Paid Holiday Schedule
  • Pet, Legal, and Life Insurance
  • Tuition Reimbursement