Senior Data Engineer

Capital Bank | Rockville, MD
$115,000 - $130,000 | Remote

About The Position

As part of the Technology team, the Senior Data Engineer will act as a subject matter expert (SME) responsible for managing, designing, and optimizing enterprise data pipelines and database solutions that support all lines of business at Capital Bank, including our Commercial Bank, Consumer Card division, and Mortgage division. This role owns end-to-end delivery and operations across our primary technology stack (Azure, SQL Server, and Snowflake), ensuring scalable, secure, and high-performance data platforms, reliable integrations with enterprise systems, and strong data governance. The Senior Data Engineer will build and optimize ETL/ELT processes, data lake and data warehouse solutions, and data transfer/file processing workflows to enable trusted analytics and reporting across the bank.

Requirements

  • Bachelor’s degree or higher in Computer Science, Information Systems, or a related field.
  • 6+ years of experience in data engineering, ETL, and database management; experience with cloud-based databases and in financial services preferred.
  • Experience in Database Administration (DBA), managing and optimizing databases for performance and security.
  • 3+ years of experience designing and building data lakes and data warehouses using platforms like Microsoft Fabric, Snowflake, Amazon Redshift, or Google BigQuery.
  • 2+ years of experience using data visualization tools like Power BI, Sisense, Google Looker, Tableau, or similar platforms.
  • Experience with managing data transfers and file processes, including SFTP, secure data pipelines, and real-time or batch data movement.
  • Experience proactively identifying, monitoring, and addressing data quality issues in line with established data quality standards.
  • Excellent communication skills, with the ability to collaborate effectively across teams and stakeholders and to explain technical concepts to non-technical audiences.
  • Expertise in cloud-based database platforms, such as Azure SQL, Amazon RDS, Google Cloud Spanner, and Snowflake.
  • Strong knowledge of data lake and data warehouse architectures, including designing efficient schemas, partitioning strategies, and optimizing storage.
  • Proficiency with data integration tools and technologies, such as Apache Kafka, Apache Spark, Talend, or Informatica.
  • Hands-on experience building and maintaining ETL pipelines to support large-scale data environments.
  • Advanced SQL skills and familiarity with programming languages like Python or Java for data manipulation and automation.
  • Experience with data visualization platforms, including building and optimizing dashboards using Sisense, Power BI, Google Looker, Tableau, or similar tools.
  • Experience with CI/CD tools (e.g., GitLab, Azure DevOps, Jenkins) and data pipeline orchestration tools (e.g., Apache Airflow, Apache NiFi, Azure Data Factory).
  • Strong understanding of database security best practices, including encryption, access controls, and compliance with regulatory standards.
  • Ability to manage data file transfers and processing workflows effectively.
  • Experience with database monitoring and performance tuning in cloud and hybrid environments.
  • Preferred: experience understanding data relationships and developing and optimizing SQL queries, stored procedures, and database functions in both OLTP and OLAP systems.
  • Strong organizational and problem-solving skills in Agile or fast-paced environments.

Responsibilities

  • Serve as the subject matter expert for ETL and database solutions, collaborating with business stakeholders and IT teams to define requirements, gather data, and implement optimized data solutions.
  • Design, implement, and maintain data systems in Snowflake to ensure data scalability and accessibility.
  • Implement and manage data lakes and data warehouses, creating pipelines and data models to enable efficient analytics and reporting.
  • Establish and document strategies for managing data transfer processes, including secure file transfers (SFTP), batch data processing, and real-time streaming.
  • Build and optimize ETL pipelines for data extraction, transformation, and loading into operational databases or analytical platforms.
  • Integrate and support data visualization tools such as Power BI, Sisense, Google Looker, Tableau, or similar platforms to enable actionable insights for business stakeholders.
  • Develop and maintain optimized data models for dashboards and reporting, ensuring compatibility with visualization tools.
  • Plan, coordinate, and implement database migrations, upgrades, and patches with minimal downtime.
  • Define and enforce database governance policies, including data integrity, security, and compliance with regulatory requirements.
  • Analyze and resolve database performance issues by optimizing queries, indexes, and schema designs.
  • Partner with vendors to evaluate, select, and implement database tools, services, and technologies; stay informed about product roadmaps and industry trends.
  • Develop disaster recovery and high-availability solutions, including replication, clustering, and failover.

Benefits

  • Medical
  • Dental
  • Vision
  • Company Paid Life Insurance
  • Disability Insurance
  • Company Contributions to Your 401(k)
  • Paid Parental Leave
  • Employee Recognition Program
  • Leadership Program
  • Tuition Reimbursement Program
  • Employee Bank Checking Account
  • Generous Paid Time Off
  • Paid Holidays
  • Paid Charity Hours