Data Platform Manager - Hybrid in Pittsburgh, PA

A.C. Coy - Pittsburgh, PA
Hybrid

About The Position

The A.C. Coy Company has an immediate opening for a Data Platform Manager. This role is responsible for designing, building, and optimizing enterprise-wide data platforms within the Data Warehouse.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field
  • 5-7+ years of hands-on data engineering or architecture experience, including at least 2-4 years focused specifically on Azure Databricks and Azure cloud technologies
  • Familiarity with data modeling and schema design principles
  • Proficiency in both relational (SQL) and NoSQL (document, key-value, graph, columnar) databases, including the ability to develop and maintain data models and schemas that support data analysis and reporting requirements
  • Knowledge of frameworks such as Apache Hadoop, Spark, or Presto/Trino for efficiently processing and retrieving massive data volumes
  • Understanding of file formats such as Parquet, Avro, and ORC, along with compression techniques
  • Deep proficiency in programming languages: Python (specifically PySpark), SQL, PowerShell, and Scala
  • Hands-on experience with Azure Cloud infrastructure, including Networking (VNETs), Key Vault, and Identity Management
  • Deep knowledge of Apache Spark runtime internals, MLflow for MLOps, and orchestration tools like Airflow

Nice To Haves

  • 2-5 years of experience managing a team of data engineers, data scientists, and/or analysts
  • Microsoft Certified: Azure Data Engineer Associate (DP-203), Databricks Certified Data Engineer Professional, or Azure Solutions Architect Expert
  • Expertise in indexing strategies, query optimization, execution plans, and partitioning/sharding

Responsibilities

  • Lead and mentor a team of data engineers, conducting code reviews and ensuring development standards
  • Support troubleshooting and incident management for data-related issues in production
  • Collaborate with business stakeholders, data scientists, and other team members to gather requirements and translate them into technical specifications
  • Lead the design, development, and deployment of scalable, high-performance data pipelines using Azure Databricks, ensuring data integrity, availability, and the efficient extraction, transformation, and loading of data from various sources into the Azure Databricks Data Warehouse
  • Collaborate with data scientists, analysts, and other engineering teams to deliver business-critical insights. Optimize pipeline performance, cost, and scalability in the Azure cloud environment
  • Define best practices for data ingestion, processing, storage, and governance. Implement data quality checks and validation procedures to ensure the accuracy and integrity of data across various sources, including APIs, databases, and streaming platforms
  • Collaborate with data scientists and analysts to operationalize and deploy machine learning models
  • Define the end-to-end Lakehouse architecture using Delta Lake, implementing medallion architecture (Bronze, Silver, Gold layers) for robust data processing
  • Oversee the development of robust, scalable batch and streaming ETL/ELT pipelines with minimal latency using PySpark, Scala, and SQL
  • Implement data transformations, enrichment, and quality checks using PySpark/Scala within the Databricks environment
  • Integrate real-time and batch data sources using Apache Kafka and ADF
  • Support large-scale data pipelines using Apache Spark on Databricks, Kafka, Stelo, and Azure Data Factory (ADF)
  • Implement Unity Catalog for unified governance, data security, fine-grained access control (RBAC), privacy measures, and data lineage tracking
  • Tune Spark jobs and Databricks clusters to maximize throughput while maintaining cost efficiency through auto-scaling and cluster policies
  • Orchestrate workflows by integrating Databricks with other Azure services like Azure Data Factory (ADF), Azure Data Lake Storage (ADLS Gen2), and Azure DevOps for CI/CD pipelines
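For candidates unfamiliar with the medallion (Bronze/Silver/Gold) layering mentioned above, here is a toy, Spark-free sketch of the idea in plain Python: raw records land in Bronze as-is, Silver applies deduplication and quality checks, and Gold holds business-level aggregates. The sample records and field names are hypothetical; in practice each layer would be a Delta Lake table processed with PySpark.

```python
# Bronze layer: raw events exactly as ingested, including duplicates
# and malformed rows.
bronze = [
    {"order_id": "1", "region": "east", "amount": "100.0"},
    {"order_id": "1", "region": "east", "amount": "100.0"},  # duplicate
    {"order_id": "2", "region": "west", "amount": "250.5"},
    {"order_id": None, "region": "east", "amount": "75.0"},  # missing key
]

def to_silver(rows):
    """Silver layer: deduplicate, drop rows missing the business key,
    and cast amounts to float (a simple validation/quality pass)."""
    seen, silver = set(), []
    for row in rows:
        if row["order_id"] is None or row["order_id"] in seen:
            continue
        seen.add(row["order_id"])
        silver.append({**row, "amount": float(row["amount"])})
    return silver

def to_gold(rows):
    """Gold layer: business-ready aggregate (revenue per region)."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

Here `silver` keeps only the two valid, distinct orders, and `gold` reduces them to per-region totals, mirroring how each medallion layer progressively refines the one below it.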