Fox • Posted about 2 months ago
$160,000 - $213,000/Yr
Full-time • Senior
Los Angeles, CA
Broadcasting and Content Providers

About the position

The FOX Data Platform Team is looking for a Principal Engineer. This is a great opportunity to join a data-first media company and be part of our Enterprise Data Platform team, which prides itself on making Fox a data-driven organization. The successful candidate will have a background as a senior technical expert responsible for the design, implementation, and operational excellence of the company's data engineering solutions, playing a pivotal role in driving the technical vision and strategy for data engineering across the organization. If you are passionate about data, think outside the box, bring new ideas, and find creative ways to solve complex business problems with data insights, this role is for you!

Responsibilities

  • Serve as an expert in the data engineering space; build and mentor a community of practice for data engineering, fostering an environment of continuous learning and development
  • Lead the design and architecture of robust, scalable, and efficient data solutions that handle large-scale data processing and analytics
  • Design and develop scalable and reliable data pipelines to support analytics, reporting, and AI use cases
  • Lead by example: help team members solve complex technical problems hands-on, provide technical direction, and raise the team's overall technical expertise
  • Drive innovation through research on emerging technologies and tools in the data engineering space
  • Collaborate with business leaders, product managers, data analysts, data scientists, and engineers to understand business needs and translate them into actionable data engineering plans
  • Take data insights to the next level by implementing generative AI use cases
  • Explore, analyze and understand the data from internal and external sources, outline designs to integrate, centralize and unify data
  • Develop and maintain an enterprise-wide data model and metadata strategy
  • Create reference designs, standards, and best practices to ensure engineering teams build distributed, low-latency, reliable data pipelines
  • Continuously audit data management system performance, refining and optimizing as required
  • Perform root cause analysis on internal and external data processes to answer specific business questions and identify opportunities for improvement
  • Implement data governance and compliance protocols in line with industry and security standards
  • Develop and maintain detailed documentation for systems, data standards, and procedures
  • Promote a culture of engineering excellence by defining standards and best practices and conducting periodic reviews
  • Partner with the Infrastructure team to create effective and efficient CI/CD pipelines
  • Advance a culture of operational excellence through continuous process improvements, performance tuning, disaster recovery, and high availability strategies
  • Communicate complex technical concepts and solutions in a clear and effective manner to a variety of stakeholders

Requirements

  • Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field
  • Experience in an expert-level data engineering role, with hands-on experience building large-scale data processing systems
  • In-depth knowledge of data management concepts, data modeling techniques, and methodologies in a cloud-based big data ecosystem
  • Strong experience in designing complex, distributed systems with a focus on reliability and scalability
  • Good understanding of Data Science, generative AI use cases and LLMs
  • Proven ability to drive research and technology adoption with measurable business impact
  • Expertise in creating end-to-end data flow architectures and scalable, flexible data models
  • Strong understanding of modern data platforms and designs: data warehouse, data lake, lakehouse, data mesh, real-time streaming, etc.
  • Experience working with a modern data engineering tech stack and platforms (Databricks, Snowflake, AWS services, dbt)
  • Experience with distributed data architectures and big data processing technologies (Spark, EMR, Hadoop)
  • Experience working with real-time stream processing and ingestion frameworks (Apache Kafka, Kinesis, Flink, or Spark Structured Streaming)
  • Proficiency in Python and PySpark
  • Expertise in advanced SQL programming and performance tuning
  • Experience with CI/CD using GitHub Actions or Jenkins
  • Proficiency in SQL development and database administration
  • Experience working with structured, semi-structured and unstructured data
  • Confident in decision-making, with the ability to explain processes and choices as needed
  • Excellent multitasking skills and task management strategies
  • Strong strategic thinking, analytics skills and business acumen
  • Sees the big picture and is fully aware of technology and business directions
  • Optimizes the use of all available resources
  • Effectively manages competing priorities and promotes agile delivery
  • Strong ability to analyze and interpret complex datasets
  • Excellent problem-solving skills and attention to detail
  • Exceptional leadership skills with a track record of mentoring top-tier data engineers
  • Superior communication and interpersonal abilities

Nice-to-haves

  • Experience in developing and implementing generative AI models and algorithms
  • Familiarity with natural language processing (NLP) and computer vision for generative AI applications
  • Experience in building and deploying generative AI systems in real-world applications

Benefits

  • Medical, dental, and vision insurance
  • 401(k) plan
  • Paid time off
  • Annual discretionary bonus