Manager-Data Platforms

Buchanan Ingersoll & Rooney PC
Pittsburgh, PA
Hybrid

About The Position

Buchanan Ingersoll & Rooney is a national law firm with a proven reputation for providing progressive, industry-leading legal, business, regulatory and government relations advice to our regional, national and international clients. We are searching for a Data Platform Manager for our corporate Pittsburgh, PA location. This role is for a senior technical leader who will be responsible for designing, building, and optimizing scalable enterprise data platforms on the Databricks data warehouse hosted on the firm’s Azure cloud platform. The position combines deep expertise in Databricks with broader knowledge of the Microsoft Azure ecosystem to drive and deliver high-performance data engineering initiatives, data analytics, and data science solutions. It requires hands-on experience with Azure data services, including Azure Data Lake Storage, Azure SQL Database, Azure Data Factory, Azure Databricks, and Azure Synapse Analytics. The ideal candidate will possess a strong foundation in cloud data platforms and streaming technologies, combined with a leadership mindset to mentor and guide teams in delivering high-quality solutions. This role is critical in delivering scalable, robust data solutions that drive actionable insights and support decision-making.

Requirements

  • 5-7+ years of hands-on data engineering or architecture experience, with at least 2-4 years focused specifically on Azure Databricks and Azure cloud technologies.
  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • Proficiency in both Relational (SQL) and NoSQL (Document, Key-Value, Graph, Columnar) databases; ability to develop and maintain data models and schemas to support data analysis and reporting requirements.
  • Knowledge of frameworks such as Apache Hadoop, Spark, or Presto/Trino for storing, optimizing, and efficiently retrieving and processing massive data volumes.
  • Understanding of file formats such as Parquet, Avro, or ORC, and of compression techniques.
  • Deep proficiency in programming languages: Python (specifically PySpark), SQL, PowerShell, and Scala.
  • Hands-on experience with Azure cloud infrastructure, including networking (VNets), Key Vault, and identity management; stays current with the latest Azure and enterprise cloud data technologies.
  • Deep knowledge of Apache Spark runtime internals, MLflow for MLOps, and orchestration tools like Airflow.

Nice To Haves

  • 2-5 years of experience managing a team of data engineers, data scientists, and/or analysts is preferred.
  • Microsoft Certified: Azure Data Engineer Associate (DP-203)
  • Databricks Certified Data Engineer Professional
  • Azure Solutions Architect Expert

Responsibilities

  • Lead and mentor a team of data engineers, conducting code reviews and enforcing development standards. Support troubleshooting and incident management for data-related issues in production.
  • Collaborate with business stakeholders, data scientists, and other team members to gather requirements and translate them into technical specifications.
  • Lead the design, development, and deployment of scalable, high-performance data pipelines using Azure Databricks, ensuring data integrity, availability, and the efficient extraction, transformation, and loading of data from various sources into the firm’s Azure Databricks data warehouse.
  • Collaborate with data scientists, analysts, and other engineering teams to deliver business-critical insights. Optimize pipeline performance, cost, and scalability in the Azure cloud environment.
  • Define best practices for data ingestion, processing, storage, and governance. Implement data quality checks and validation procedures to ensure the accuracy and integrity of data across various sources, including APIs, databases, and streaming platforms.
  • Collaborate with data scientists and analysts to operationalize and deploy machine learning models.
  • Define the end-to-end Lakehouse architecture using Delta Lake, implementing a medallion architecture (Bronze, Silver, Gold layers) for robust data processing (see the first sketch following this list).
  • Apply data modeling and schema design principles across the platform.
  • Oversee the development of robust, scalable batch and streaming ETL/ELT pipelines with minimal latency using PySpark, Scala, and SQL.
  • Implement data transformations, enrichment, and quality checks using PySpark/Scala within the Databricks environment.
  • Integrate real-time and batch data sources using Apache Kafka and ADF.
  • Support large-scale data pipelines using Apache Spark on Databricks, Kafka, Stelo, and Azure Data Factory (ADF).
  • Implement Unity Catalog for unified governance, data security, fine-grained access control (RBAC), privacy measures, and data lineage tracking (a brief example follows this list).
  • Tune Spark jobs and Databricks clusters to maximize throughput while maintaining cost efficiency through auto-scaling and cluster policies (see the example cluster specification below).
  • Apply expertise in indexing strategies, query optimization, execution plans, and partitioning/sharding.
  • Orchestrate workflows by integrating Databricks with other Azure services like Azure Data Factory (ADF), Azure Data Lake Storage (ADLS Gen2), and Azure DevOps for CI/CD pipelines.
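
To make the medallion and data-quality responsibilities above concrete, here is a minimal PySpark sketch of a Bronze/Silver/Gold flow on Delta Lake. It assumes a hypothetical ADLS Gen2 account and a hypothetical matter-intake feed; the paths, columns, and table names are placeholders, not the firm's actual schema.

    # Minimal Bronze/Silver/Gold sketch on Delta Lake; all names below are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

    lake = "abfss://lake@examplestorage.dfs.core.windows.net"  # hypothetical ADLS Gen2 container

    # Bronze: land raw source data unchanged.
    raw = spark.read.json(f"{lake}/landing/matters/")
    raw.write.format("delta").mode("append").save(f"{lake}/bronze/matters")

    # Silver: basic quality checks and standardization before downstream use.
    silver = (
        spark.read.format("delta").load(f"{lake}/bronze/matters")
        .filter(F.col("matter_id").isNotNull())               # reject rows missing the business key
        .withColumn("opened_date", F.to_date("opened_date"))  # normalize types
        .dropDuplicates(["matter_id"])
    )
    silver.write.format("delta").mode("overwrite").save(f"{lake}/silver/matters")

    # Gold: aggregate into a reporting-ready table.
    (silver.groupBy("practice_group")
           .agg(F.count("matter_id").alias("matter_count"))
           .write.format("delta").mode("overwrite").save(f"{lake}/gold/matter_summary"))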
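
The Unity Catalog responsibility is illustrated by grants like the following, run from a Databricks notebook; the catalog, schema, table, and group names are purely illustrative assumptions.

    # Hypothetical Unity Catalog grants; object and group names are placeholders.
    spark.sql("GRANT USE CATALOG ON CATALOG firm_lakehouse TO `data_analysts`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA firm_lakehouse.gold TO `data_analysts`")
    spark.sql("GRANT SELECT ON TABLE firm_lakehouse.gold.matter_summary TO `data_analysts`")

Table and column lineage for workloads governed by Unity Catalog is recorded by the platform, which supports the lineage-tracking portion of this responsibility.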
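
For the tuning and cost-efficiency responsibility, an auto-scaling job cluster might be defined along the following lines, shown as the Python dictionary one could pass to the Databricks Jobs API; the runtime version, node type, and worker counts are assumptions, and in practice such values would typically be constrained by cluster policies.

    # Hypothetical auto-scaling job-cluster specification; values are placeholders.
    job_cluster_spec = {
        "spark_version": "15.4.x-scala2.12",                   # example LTS runtime
        "node_type_id": "Standard_DS3_v2",                     # modest Azure VM size
        "autoscale": {"min_workers": 2, "max_workers": 8},     # grow with load, cap the spend
        "spark_conf": {"spark.sql.adaptive.enabled": "true"},  # adaptive query execution
    }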

Benefits

  • Hybrid Schedule
  • Insurance – Medical, Dental, Vision
  • 401K Program
  • Retirement Savings Program
  • Generous Paid Time Off
  • Paid Holidays including a floating holiday
  • WorkWell wellness program
  • Free use of building gym
  • Caregiving assistance with Bright Horizons (child, elder, and pet care!)
  • Firm-wide emergency assistance fund
  • Free full access to LinkedIn Learning