Senior Data Engineer

Credigy
Norcross, GA

About The Position

Credigy is seeking a Senior Data Engineer to join our Data and Automation team. This role is responsible for designing, building, and maintaining scalable, reliable data pipelines across both modern cloud platforms and legacy systems. You will own data pipelines end-to-end, from ingestion through consumption, while continuously improving reliability, performance, and scalability. This is a hands-on, senior-level role with a high degree of autonomy. The ideal candidate is comfortable operating in production environments, making architectural decisions, and modernizing existing data systems while continuing to support critical legacy processes.

Founded in 2001, Credigy is a global specialty finance company with flexibility across the capital structure to acquire or finance a broad range of consumer assets. We are a wholly owned subsidiary of National Bank of Canada (NBC), and our $9.1B+ portfolio represents 400+ deals and $32B+ in total investments life-to-date. We are the partner of choice when financial institutions face complex challenges and strategic changes. If you haven’t heard of us yet, we’re okay with that: we focus on serving our business partners, not making a name for ourselves.

We are proud of our people-first company culture, which has been recognized year over year as a Top Workplace both in Atlanta and nationally. What matters to you matters to us, so we go beyond the usual benefits to offer meaningful perks that support professional growth, personal connection, and a life outside the office. Early in the hiring process, we partner with you on our innovative, personalized flexible work program to maximize compatibility between your needs and the business from day one. Our priority is hiring top talent and helping you create a career you love.

Credigy is a workplace that is free of discrimination and full of opportunity. We prioritize diversity, inclusion, and belonging, and we are dedicated to unbiased recruiting, hiring, and employment practices. Authenticity goes a long way at Credigy, and we get excited about the privilege of hiring people from diverse backgrounds. We are proud to be an Equal Opportunity Employer and are committed to ensuring that all applicants and employees are considered based on their qualifications and merit, without regard to race, ethnicity, religion, national origin, sex, sexual orientation, gender identity, age, veteran status, citizenship, disability, pregnancy, or any other status protected by law. We expect each employee to support this policy in our daily operations, and we do not tolerate discriminatory practices or harassment in any form. No matter how you identify, or what background or industry you come from, we welcome you and feel honored you are considering opportunities at Credigy.

Requirements

  • 4+ years of experience as a Data Engineer, BI Developer, or ETL Developer.
  • Strong SQL expertise with MS SQL Server and T-SQL.
  • Hands-on experience with SSIS and Azure Data Factory (ADF).
  • Experience working with Snowflake and cloud-based data platforms.
  • Proven experience designing, building, and optimizing data pipelines and data architectures.
  • Experience supporting production data systems in a fast-paced environment.
  • Bachelor’s degree in Computer Science, Information Systems, Statistics, or a related field (or equivalent experience).

Nice To Haves

  • Experience with dbt for data transformation and modeling.
  • Experience using Python for data processing and integrations.
  • Experience with Informatica.
  • Experience implementing CI/CD for data pipelines.
  • Strong understanding of data modeling, data quality, and governance best practices.

Responsibilities

  • Design, build, and maintain end-to-end data pipelines, including ingestion, transformation, validation, orchestration, and delivery.
  • Develop and optimize data pipelines primarily using Snowflake, SQL Server, and Azure-based services.
  • Ensure data is available, secure, and consistently delivered through a well-defined and scalable data architecture.
  • Assemble large, complex datasets that meet functional and non-functional business requirements.
  • Own the reliability of production data pipelines, including monitoring, troubleshooting, and incident resolution.
  • Perform root cause analysis for data issues and implement durable fixes to prevent recurrence.
  • Implement and maintain data quality checks and validation frameworks to ensure accuracy and consistency.
  • Support multiple teams and systems while maintaining high availability and data integrity.
  • Identify and lead opportunities to optimize, automate, and modernize existing data processes and pipelines.
  • Maintain and enhance legacy ETL processes, while gradually re-architecting them toward more scalable, cloud-native solutions.
  • Drive improvements in performance, cost efficiency, and maintainability of the data platform.
  • Partner closely with analytics teams, business users, and data scientists to understand data needs and translate them into effective technical solutions.
  • Communicate clearly with both technical and non-technical stakeholders, setting expectations and explaining trade-offs.
  • Contribute to data engineering standards, documentation, and best practices.
  • Use GitHub for version control, code reviews, and collaboration.
  • Contribute to and support CI/CD pipelines for data workflows to enable reliable, repeatable deployments.
  • Promote engineering best practices, including testing, documentation, and observability.