Data Engineer (AI)

Customers Bank
Malvern, PA

About The Position

At Customers Bank, we believe in working hard, working smart, and working together to deliver memorable customer experiences while having fun. Our vision, mission, and values guide us along our path to achieve excellence. Passion, attitude, creativity, integrity, alignment, and execution are the cornerstones of our behaviors; they define who we are as an organization and as individuals. Everyone is encouraged to have a personal development plan, putting our team members on the way to achieving their highest potential and succeeding in their personal and professional lives. Candidates must be legally eligible to work in the United States without sponsorship, now or in the future, to be considered.

We’re looking for builders who want to be early: not in a startup with no guardrails, and not in a legacy organization where change is slow, but in the rare middle ground where innovation actually ships. This team has a clear mandate and strong executive air cover: AI and automation were called out as a strategic priority on our most recent earnings call and are top of mind for the CEO and executive team, with direct support from senior leadership. The opportunity is ownership: helping reimagine how banking gets done by blending legacy and modern technologies, building durable capabilities from the ground up, and contributing to a model for what a 21st-century bank can look like.

Who Is Customers Bank?

Founded in 2009, Customers Bank is a super-community bank with over $22 billion in assets. We believe in dedicated personal service for the businesses, professionals, individuals, and families we work with. We get you further, faster.

  • Focused on you: We provide every customer with a single point of contact, a dedicated team member who’s committed to meeting your needs today and tomorrow.
  • On the leading edge: We’re innovating with the latest tools and technology so we can react to market conditions more quickly and help you get ahead.
  • Proven reliability: We always ground our innovation in our deep experience and strong financial foundation, so we’re a partner you can trust.

What You’ll Do

The Data Engineer will play a foundational role in building Customers Bank’s modern data ecosystem, enabling enterprise automation, analytics, and AI initiatives. This is an opportunity to join at the ground floor of a growing data capability and help define how data is ingested, transformed, governed, and delivered across the organization.

Working as part of a cross-functional delta team, the Data Engineer will partner closely with RPA developers, analytics teams, and business stakeholders to support ETL/ELT pipelines, data integrations, and data acquisition efforts across operational, financial, and third-party systems. The role emphasizes both hands-on engineering delivery and thoughtful data design, with a strong focus on quality, reliability, and scalability.

This position sits within the data and strategy organization and contributes directly to high-visibility initiatives tied to operational efficiency, automation, and AI enablement.

Requirements

  • 5+ years of experience in data engineering, data integration, or ETL development, ideally in financial services, consulting, or complex enterprise environments.
  • 5+ years of expert-level SQL proficiency, including CTEs, window functions, performance tuning, and data modeling.
  • 1+ years of experience integrating with REST and/or SOAP APIs (see the illustrative sketch after this list).
  • Strong problem-solving skills and the ability to work across ambiguous, evolving requirements.
  • Experience collaborating closely with technical and non-technical stakeholders.
  • Solid understanding of the Software Development Life Cycle (SDLC) and experience working in Agile delivery environments.
  • Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field (or equivalent practical experience).
  • Advanced SQL expertise across relational and analytical databases.
  • Hands-on experience with cloud data platforms such as Snowflake and Microsoft Fabric.
  • Experience building modern pipelines using tools such as Azure Data Factory, dbt, Microsoft Fabric, or equivalent ELT/ETL technologies.
  • Proficiency in Python (or similar languages such as Java, R, or C#) for data processing, transformation, and integration.
  • Experience working with a range of relational and analytical databases and query engines (e.g., SQL Server, Oracle, PostgreSQL, MySQL, Aurora, BigQuery, Presto).
  • Exposure to Azure data services and related tooling (e.g., Azure SQL, Azure Databricks, MLflow), or equivalent cloud-native alternatives.
  • Familiarity with CI/CD and DevOps practices, including exposure to Azure DevOps (ADO) or similar platforms.
  • Understanding of data quality practices, logging, monitoring, and operational support.
  • Collaborative and consultative, with a strong partnership mindset.
  • Detail-oriented with a strong sense of ownership and accountability.
  • Able to explain data concepts and trade-offs to non-technical audiences.
  • Comfortable balancing speed of delivery with data quality and long-term sustainability.
  • Curious and proactive, with a desire to continuously improve data capabilities.
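
For a flavor of the day-to-day work these requirements describe, here is a minimal sketch in Python of a REST-to-staging ingestion step. The endpoint, payload shape, and staging table are hypothetical placeholders invented for illustration; they are not Customers Bank systems or the specific platforms named in this posting.

```python
"""Minimal sketch of a REST-to-staging ingestion step.

All names here (API_URL, the payload fields, the staging table) are
hypothetical placeholders for illustration only.
"""
import sqlite3

import requests

API_URL = "https://api.example.com/v1/transactions"  # hypothetical endpoint


def fetch_pages(session: requests.Session, url: str):
    """Yield result pages, following a simple `next` cursor if present."""
    while url:
        resp = session.get(url, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        # Assumed payload shape: {"results": [...], "next": "<url or null>"}
        yield body.get("results", [])
        url = body.get("next")  # None ends the loop


def load_staging(rows, conn: sqlite3.Connection) -> None:
    """Idempotent upsert into a staging table keyed on the source id."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS stg_transactions (
               txn_id TEXT PRIMARY KEY, amount REAL, posted_at TEXT)"""
    )
    conn.executemany(
        """INSERT INTO stg_transactions (txn_id, amount, posted_at)
           VALUES (:txn_id, :amount, :posted_at)
           ON CONFLICT(txn_id) DO UPDATE SET
               amount = excluded.amount, posted_at = excluded.posted_at""",
        rows,
    )
    conn.commit()


if __name__ == "__main__":
    with requests.Session() as session, sqlite3.connect("staging.db") as conn:
        for page in fetch_pages(session, API_URL):
            load_staging(page, conn)
```

The upsert keyed on the source identifier is one common way to make a pipeline step safe to rerun after a partial failure, which matters for the reliability and operational-support expectations listed above.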

Nice To Haves

  • Advanced degree or relevant certifications.
  • Experience with Talend and familiarity with Power BI for downstream consumption (a major plus).
  • Exposure to event-driven architectures, streaming data, or AI/ML data enablement.

Responsibilities

  • Design, build, and maintain ETL/ELT pipelines that ingest, transform, and integrate data from a wide variety of source systems.
  • Support data gathering and integration efforts across core operational, financial, and third-party platforms using REST and SOAP APIs, files, and database connections.
  • Enable downstream consumption of data by RPA automations, analytics dashboards, reporting, and AI/ML solutions.
  • Work with structured and semi-structured data supporting batch, scheduled, event-driven, and near-real-time use cases.
  • Partner closely with business SMEs, automation developers, and analysts to understand data requirements and translate them into scalable, production-ready engineering solutions.
  • Implement data quality checks, validation rules, and monitoring to ensure accuracy, consistency, and reliability (see the sketch after this list).
  • Contribute to the definition of data engineering standards, patterns, and best practices suitable for enterprise scale.
  • Document data pipelines, data flows, and assumptions to support transparency, maintainability, and governance.
  • Troubleshoot data issues, optimize pipeline performance, and continuously improve reliability and efficiency.
  • Support data governance, security, and access controls in alignment with enterprise and regulatory requirements.
  • Stay current on modern data engineering tools, cloud data platforms, and industry best practices.
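
As an illustration of the data quality checks described above, here is a minimal sketch that uses a CTE and a window function (the SQL skills listed under Requirements) to flag duplicate business keys after a load. The raw_transactions table and its columns are assumptions made for this example, not a real schema.

```python
"""Minimal sketch of a post-load data-quality check.

The raw_transactions table is hypothetical; in a real pipeline this
query would run against the warehouse, and a failure would feed the
team's monitoring and alerting rather than just exiting.
"""
import sqlite3

# CTE + window function: count copies of each business key, then
# surface any key that appears more than once.
DUPLICATE_CHECK = """
WITH ranked AS (
    SELECT
        txn_id,
        COUNT(*) OVER (PARTITION BY txn_id) AS n_copies
    FROM raw_transactions
)
SELECT DISTINCT txn_id
FROM ranked
WHERE n_copies > 1
"""


def run_duplicate_check(conn: sqlite3.Connection) -> list:
    """Return failing txn_ids; an empty list means the check passed."""
    return [row[0] for row in conn.execute(DUPLICATE_CHECK)]


if __name__ == "__main__":
    with sqlite3.connect(":memory:") as conn:
        # Demo fixture: a tiny table seeded with one duplicated key.
        conn.execute("CREATE TABLE raw_transactions (txn_id TEXT)")
        conn.executemany(
            "INSERT INTO raw_transactions (txn_id) VALUES (?)",
            [("t1",), ("t2",), ("t2",)],
        )
        failures = run_duplicate_check(conn)
        if failures:
            raise SystemExit(f"duplicate txn_ids found: {failures}")
        print("quality checks passed")
```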