Technical Consultant

Express – Columbus, OH

About The Position

PHOENIX Retail, LLC is a retail platform operating the Express and Bonobos brands worldwide. Express is a multichannel apparel brand dedicated to a design philosophy rooted in modern, confident and effortless style, whether dressing for work, everyday or special occasions. Bonobos is a menswear brand known for pioneering exceptional fit and a personalized, innovative retail model. Customers can experience our brands in over 400 Express retail and Express Factory Outlet stores, 50 Bonobos Guideshops, and online at www.express.com and www.bonobos.com.

About Express

Express is a multichannel apparel brand dedicated to creating confidence and inspiring self-expression. Since its launch in 1980, the brand has embraced a design philosophy rooted in modern, confident and effortless style. Whether dressing for work, everyday or special occasions, Express ensures you look and feel your best, wherever life takes you. The Company operates over 400 retail and outlet stores in the United States and Puerto Rico, the express.com online store and the Express mobile app.

The Technical Consultant is responsible for designing, building, and maintaining scalable data infrastructure and systems that support analytics, machine learning, and business intelligence initiatives. This role combines technical expertise with leadership, guiding a team of data engineers while collaborating cross-functionally to ensure data is reliable, accessible, and aligned with organizational goals.

In this role, you will:

  • Lead the design and evolution of enterprise-scale data architecture across GCP, BigQuery, Snowflake, PostgreSQL, and Databricks environments.
  • Build, optimize, and maintain scalable ELT pipelines using Python, dbt, Hevo, Spark, Informatica, and Fivetran.
  • Define and implement data modeling standards to support analytics, reporting, and data science use cases.
  • Drive data quality, governance, and lineage initiatives to ensure trusted and compliant data assets.
  • Partner with cross-functional teams, including analytics, data science, product, and business stakeholders, to deliver impactful data solutions.
  • Establish best practices for DataOps, including CI/CD, testing, monitoring, and deployment of data pipelines.
  • Optimize performance and cost efficiency across large-scale data platforms and cloud infrastructure.
  • Lead incident response and root cause analysis for data pipeline and platform issues.
  • Translate business requirements into scalable, maintainable, and secure data engineering solutions.
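For illustration, the ELT pattern named above (land data raw first, then transform inside the warehouse, the step dbt owns in this stack) can be sketched in plain Python. This is a minimal sketch, not Express's actual pipeline: sqlite3 stands in for a warehouse such as BigQuery or Snowflake, and the table and column names are invented.

```python
import sqlite3

# In-memory SQLite stands in for a cloud warehouse (BigQuery, Snowflake, ...).
conn = sqlite3.connect(":memory:")

# Extract + Load: land source rows untransformed in a raw table,
# keeping everything as text exactly as it arrived.
raw_orders = [("1001", "2024-01-05", "89.90"), ("1002", "2024-01-06", "45.00")]
conn.execute("CREATE TABLE raw_orders (order_id TEXT, order_date TEXT, amount TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# Transform: typing and cleaning happen inside the warehouse with SQL —
# in a dbt project this SELECT would live in a model file.
conn.execute("""
    CREATE TABLE stg_orders AS
    SELECT CAST(order_id AS INTEGER) AS order_id,
           order_date,
           CAST(amount AS REAL) AS amount
    FROM raw_orders
""")

total = conn.execute("SELECT SUM(amount) FROM stg_orders").fetchone()[0]
print(round(total, 2))  # 134.9
```

The point of loading raw before transforming is that the untouched source data stays queryable, so transformations can be rebuilt or audited without re-extracting.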

Requirements

  • Bachelor's Degree or Advanced/Master's Degree in Computer Science, Software Engineering, Economics, Statistics, Applied Math, or other quantitative disciplines.
  • 7-10 years of experience working with Python, SQL, PySpark, and bash scripts.
  • Proficient in software development lifecycle and software engineering practices.
  • 5+ years of experience developing and maintaining robust data pipelines for both structured and unstructured data for advanced analytical and reporting use cases.
  • 3+ years of experience working with Cloud Data Warehousing (Redshift, Snowflake, Databricks SQL, BigQuery or equivalent) platforms and distributed frameworks like Spark.
  • Hands-on experience with CI/CD tools (e.g., Jenkins), version control (GitHub, Bitbucket), and orchestration (Airflow, Prefect, or equivalent).
  • Strong knowledge of data modeling, ETL/ELT design, and data warehousing methodologies.
  • Strong programming experience in SQL and Python.
  • Hands-on experience with cloud computing, preferably Google Cloud Platform
  • Advanced expertise with Microsoft Office products.
  • Enterprise architecture and systems thinking
  • Leadership, mentoring, and team scaling
  • Cross-functional collaboration and stakeholder alignment
  • Strong problem-solving and decision-making skills
  • Focus on data reliability, performance, and governance

Responsibilities

  • Lead technical design and implementation of data engineering solutions, ensuring best practices and high-quality deliverables.
  • Mentor and guide junior engineers, conducting code reviews and technical sessions to foster team growth.
  • Perform detailed analysis of raw data sources by applying business context, and collaborate with cross-functional teams to transform raw data into data products.
  • Create scalable and trusted data pipelines which generate curated data assets in centralized data lake/data warehouse ecosystems.
  • Monitor and troubleshoot data pipeline performance, identifying and resolving bottlenecks and issues.
  • Create and maintain effective documentation for projects and practices, ensuring transparency and effective team communication.
  • Provide technical leadership and mentorship on continuous improvement in building reusable and scalable solutions.
  • Design, build, and enhance semantic layer content, including shared dbt models and reusable components that support scalable analytics.
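As a loose illustration of the "reusable components" idea in the last responsibility above: in dbt, a macro centralizes logic that many models share so a definition cannot drift between reports. The sketch below mimics that pattern in plain Python (all names are invented for the example), rendering one canonical revenue expression into two downstream queries.

```python
# A shared expression, analogous to a dbt macro: defined once, reused by
# every model that reports revenue.
def net_revenue_sql(amount_col: str = "amount", discount_col: str = "discount") -> str:
    return f"({amount_col} - {discount_col})"

def render_model(table: str) -> str:
    # Each "model" reuses the shared expression instead of re-deriving it.
    return f"SELECT order_id, {net_revenue_sql()} AS net_revenue FROM {table}"

daily = render_model("stg_orders_daily")
monthly = render_model("stg_orders_monthly")
print(daily)
```

If the definition of net revenue ever changes, it changes in one place and every rendered model picks it up, which is the maintainability argument for a shared semantic layer.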


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Number of Employees

5,001-10,000 employees
