Senior Data Engineer/Architect

Textron
Towson, MD

About The Position

We are seeking an accomplished Senior Data Architect/Senior Data Engineer with deep expertise in architecting and delivering modern data platforms using Azure Databricks, Delta Lake, and Medallion Architecture. The ideal candidate has strong experience in data warehousing, lakehouse best practices, ACID transaction processing, and designing scalable data ecosystems that support advanced analytics, AI, and machine learning workloads.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
  • 5+ years of experience in data engineering, data architecture, or enterprise analytics platform development.
  • Extensive hands-on experience with Azure Databricks, including building and managing data pipelines.
  • Strong expertise in Medallion Architecture development, including the creation and optimization of Bronze, Silver, and Gold layers.
  • Proficiency in SQL, Python, PySpark, and other data processing/ETL languages and frameworks.
  • Experience with Azure Data Lake Storage, Azure Data Factory, and other related Azure services.
  • Familiarity with data warehousing concepts and technologies such as Azure Synapse Analytics.
  • Knowledge of data governance, security, and compliance best practices.
  • Excellent problem-solving skills and the ability to work independently and in a team environment.
  • Strong communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
  • Experience with CI/CD and DevOps practices (Azure DevOps, GitHub Actions, Terraform).

Nice To Haves

  • Experience with Oracle EBS data structures.
  • Certification in Azure Data Engineering or Azure Solutions Architecture is a plus.

Responsibilities

  • Design, develop, and implement large-scale data processing systems and solutions using Azure Databricks.
  • Architect and build Medallion Architecture layers (Bronze, Silver, Gold) to ensure efficient data pipeline processing from raw data to cleaned and enriched datasets.
  • Build ACID-compliant pipelines leveraging Delta Lake features such as transaction logs, schema enforcement, time travel, and versioning (a minimal pipeline sketch follows this list).
  • Develop and optimize ELT (Extract, Load, Transform) processes using Databricks and other Azure services to support data warehousing, analytics, and reporting requirements.
  • Create data architectures that support AI and machine learning use cases, including feature engineering pipelines, large-scale data preprocessing, and high-performance data retrieval for model training and inference.
  • Apply data modeling best practices, including Kimball and Inmon methodologies, 3NF normalization, denormalization, dimensional modeling, and hybrid lakehouse patterns.
  • Architect solutions that support both OLTP and OLAP workloads with appropriate storage, compute, and performance strategies.
  • Develop logical and physical data models that scale for BI, analytics, predictive modeling, and ML training pipelines.
  • Apply principles such as normalization/denormalization, partitioning, indexing, constraints, and schema design to enable optimized data processing.
  • Conduct query tuning using execution plans, join optimization, Spark performance tuning, caching strategies, and workload segregation.
  • Review, optimize, and design complex transformations, including nested functions, window functions, CTEs, and ML feature creation logic (a transformation sketch follows this list).
  • Implement end-to-end data governance including metadata management, data quality, lineage, and documentation.
  • Apply security best practices including schema-based security, ACLs, role-based access control, and integration with Azure AD and Key Vault.
  • Ensure reliability, auditability, and recoverability across all layers of the lakehouse platform.
  • Monitor data pipelines and implement error handling, logging, and alerting mechanisms to ensure data quality and reliability.
  • Partner with data science, analytics, business, and engineering teams to deliver high-quality datasets for BI, ML, and AI solutions.
  • Mentor and provide technical guidance to junior data engineers and team members.
  • Perform code reviews and ensure adherence to coding standards and best practices.
  • Provide documentation and training for end-users and team members to ensure seamless adoption and usage of data solutions.
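
For illustration only, here is a minimal PySpark sketch of the Bronze-to-Silver hop referenced in the pipeline bullets above, exercising Delta Lake's transaction log, schema enforcement, and time travel. All paths, table names, and columns are hypothetical placeholders, not actual Textron systems.

    # Minimal sketch of a Bronze -> Silver Delta Lake hop on Azure Databricks.
    # Paths and column names are hypothetical; in practice they would come from
    # Unity Catalog or ADLS configuration.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

    landing_path = "abfss://lake@account.dfs.core.windows.net/landing/orders"  # hypothetical
    bronze_path = "abfss://lake@account.dfs.core.windows.net/bronze/orders"    # hypothetical
    silver_path = "abfss://lake@account.dfs.core.windows.net/silver/orders"    # hypothetical

    # Bronze: land raw records as-is; Delta's transaction log makes the write ACID.
    raw = spark.read.json(landing_path)
    raw.write.format("delta").mode("append").save(bronze_path)

    # Silver: cleanse and conform; Delta's schema enforcement rejects writes whose
    # schema does not match the existing table.
    cleaned = (
        spark.read.format("delta").load(bronze_path)
        .dropDuplicates(["order_id"])                        # hypothetical key
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .filter(F.col("order_id").isNotNull())
    )
    cleaned.write.format("delta").mode("overwrite").save(silver_path)

    # Time travel/versioning: read an earlier commit for audit or rollback.
    previous = spark.read.format("delta").option("versionAsOf", 0).load(silver_path)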
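
And a companion sketch of the transformation-review work: a CTE combined with a window function that keeps the latest record per key, the kind of logic a reviewer would tune. The silver.orders and gold.orders_latest tables are hypothetical.

    # Minimal sketch: dedupe to the latest record per order using a CTE and a
    # window function. Table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    latest_orders = spark.sql("""
        WITH ranked AS (                         -- CTE
            SELECT
                order_id,
                customer_id,
                order_ts,
                ROW_NUMBER() OVER (              -- window function
                    PARTITION BY order_id
                    ORDER BY order_ts DESC
                ) AS rn
            FROM silver.orders                   -- hypothetical Silver table
        )
        SELECT order_id, customer_id, order_ts
        FROM ranked
        WHERE rn = 1
    """)

    # Materialize as a Gold table (hypothetical name).
    latest_orders.write.format("delta").mode("overwrite").saveAsTable("gold.orders_latest")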

Benefits

  • Flexible Work Schedules
  • Education Assistance
  • Career Development & Training Opportunities
  • Wellness Program (including Fitness Reimbursement)
  • Medical, Dental, Vision & 401(K) with Company Funding
  • Paid Parental Leave