Staff Data Engineer

Optimum
New York, NY
$182,000 - $192,000

About The Position

We are Optimum, a leader in the fast-paced world of connectivity, and we're on the hunt for enthusiastic professionals to join our team! We understand that connectivity isn't just a luxury anymore – it's a necessity that empowers lives, fuels businesses, and drives innovation. A career at Optimum means you'll be enabling progress and enhancing lives by providing reliable, high-speed connectivity solutions that keep the world connected. We owe our success to our amazing product, our commitment to our people, and the connections we make in every community. If you are resourceful, collaborative, team-oriented, and passionate about delivering consistent excellence, Optimum is the company for you!

Job Summary

CSC Holdings, LLC seeks a Staff Data Engineer to architect, build, and maintain scalable, fault-tolerant data pipelines for the continuous ingestion, transformation, and loading of large datasets across distributed systems.

Requirements

  • Position requires a Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field followed by 5 years of progressively responsible experience with implementing data validation and data integrity processes.
  • 5 years of experience writing complex SQL queries, including joins, unions, subqueries, and window functions.
  • 5 years of experience with version control systems for collaboration and version tracking of SQL scripts and pipelines.
  • 5 years of experience with cloud infrastructure, including AWS, Snowflake, Google BigQuery, Google Cloud, and Azure, for database and data pipeline processes.
  • 5 years of experience automating data pipelines with Airflow, Apache Kafka, and Spark.
  • 3 years of experience with Python programming for data orchestration and automation.

Responsibilities

  • Leverage data technologies to support real-time and batch data processing workflows.
  • Implement workflow orchestration frameworks to automate ETL jobs, ensuring data freshness, error handling, and optimized resource allocation.
  • Perform scheduling, dependency tracking, and monitoring to reduce manual intervention and minimize system downtime.
  • Define data architecture to support structured and unstructured data.
  • Utilize dimensional modeling to enable efficient querying in data warehouses.
  • Leverage indexing, partitioning, and sharding strategies to handle high-throughput and low-latency data operations.
  • Utilize BigQuery and BigTable for scalable data lakes and adaptive query execution.
  • Drive strategic initiatives from concept to delivery.
  • Lead cross-functional data engineering projects and mentor team members.