Interpretable AI Research Engineer

Equifax
Atlanta, GA (Hybrid)

About The Position

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Requirements

  • A PhD in a quantitative discipline (strongly preferred) with 5+ years of relevant professional/research experience, OR an MS with 7+ years of exceptional, specialized eXplainable AI (XAI) experience.
  • A strong, demonstrable background in interpretability, representation analysis, or research/tooling focused on Transformer models.
  • Strong mathematical and statistical foundations (linear algebra, statistics) sufficient to implement/interpret representation analysis methods (the work references advanced techniques such as Kernel CCA).
  • Experience with LLMs and Transformer development workflows (e.g., common training/evaluation patterns).
  • Communication & documentation strength: ability to clearly explain complex findings to technical stakeholders and produce crisp experimental notes and interpretability artifacts.
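Since the listing names Kernel CCA as a representative representation-analysis method, the sketch below shows a minimal regularized kernel CCA in plain NumPy (RBF kernels, top canonical correlation only). The kernel choice, `gamma`, and `reg` values are illustrative assumptions, not anything specified by the role.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

def center_kernel(K):
    # Double-center the kernel matrix (zero-mean in feature space).
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_cca_correlation(X, Y, gamma=1.0, reg=1e-3):
    """Top canonical correlation between two views under RBF kernels,
    using the standard regularized KCCA eigenproblem
    (Kx + kI)^-1 Ky (Ky + kI)^-1 Kx a = rho^2 a."""
    Kx = center_kernel(rbf_kernel(X, gamma))
    Ky = center_kernel(rbf_kernel(Y, gamma))
    n = Kx.shape[0]
    Rx = np.linalg.solve(Kx + reg * n * np.eye(n), Ky)
    Ry = np.linalg.solve(Ky + reg * n * np.eye(n), Kx)
    rho2 = np.max(np.linalg.eigvals(Rx @ Ry).real)
    return float(np.sqrt(np.clip(rho2, 0.0, 1.0)))
```

In a representation-analysis setting the two views would typically be activations from two model layers (or a model layer and a set of hand-engineered features); a correlation near 1 suggests the views encode largely overlapping information.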

Nice To Haves

  • Strong research engineering skills in Python with demonstrated ability to implement and iterate quickly while maintaining code quality and reproducibility.
  • Expertise with modern deep learning frameworks (TensorFlow strongly preferred; PyTorch acceptable).
  • Hands-on experience in Mechanistic Interpretability / XAI for Transformers, including the ability to run controlled experiments that isolate and explain internal model behavior (e.g., activation-based interventions and interpretability analysis patterns).
  • ML engineering experience running Transformers on cloud GPU infrastructure, including practical understanding of training large models with NVIDIA GPUs and troubleshooting training/runtime issues.
  • Familiarity with Hugging Face transformers/datasets ecosystems and applied workflows for training/fine-tuning and data iteration.
  • Exposure to high-volume data handling for time-series or sequential modeling (e.g., efficient data loading strategies; pipeline integration patterns), especially when datasets are large and irregular.
  • Experience applying interpretability techniques to regulated or high-stakes domains (finance, healthcare, compliance-heavy environments), where explanation quality and defensibility matter.
  • Evidence of impact via publications, open-source contributions, or internal tooling related to interpretability, representation learning, or Transformer analysis.
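The "activation-based interventions" mentioned above can be illustrated without any framework: the toy sketch below zero-ablates one hidden unit of a small random network and ranks units by how much each ablation shifts the output. The architecture and data here are stand-ins, not anything from the role; the same hook-style pattern applies to real Transformer components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network standing in for a Transformer component.
W1 = rng.normal(size=(4, 8))   # input -> hidden
W2 = rng.normal(size=(8, 2))   # hidden -> logits

def forward(x, ablate_unit=None):
    """Forward pass with an optional zero-ablation of one hidden unit."""
    h = np.maximum(x @ W1, 0.0)          # ReLU activations
    if ablate_unit is not None:
        h = h.copy()
        h[:, ablate_unit] = 0.0          # intervene on the activation
    return h @ W2

x = rng.normal(size=(16, 4))
baseline = forward(x)

# Attribute each hidden unit by the mean output shift its ablation causes.
effects = [np.abs(forward(x, ablate_unit=u) - baseline).mean() for u in range(8)]
ranking = np.argsort(effects)[::-1]      # most influential units first
```

In practice the same pattern is run via forward hooks on real model activations, with patched (rather than zeroed) values when testing causal hypotheses.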

Responsibilities

  • Develop methods to map latent embeddings to human concepts, with the explicit goal of labeling embedding dimensions/features as interpretable text descriptions relevant to credit risk modeling.
  • Design and execute mechanistic interpretability research on Transformer architectures (e.g., probing representations, causal interventions, and internal component analysis) to understand how credit risk signals are encoded.
  • Build an end-to-end interpretability workflow: experimental design, implementation, evaluation, and clear documentation of findings and limitations.
  • Engineer research code into durable tooling: refactor experimental notebooks/prototypes into clean, modular, testable Python code that supports iteration and reuse.
  • Collaborate with credit risk domain experts to ensure interpretability outputs are meaningful, actionable, and grounded in domain reality.
  • Partner with internal MLOps / ML engineers to run large-scale training and integrate research tooling with GPU/cloud execution environments—while keeping momentum when support is intermittent.
  • Contribute to technical strategy: propose experiments, define success criteria, and help de-risk the approach through principled iteration and evidence.
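The first responsibility, labeling embedding dimensions with human-readable concepts, can be sketched as a simple correlation-based labeler: for each dimension, find the annotated concept it tracks most strongly and keep the label if the correlation clears a threshold. The concept names, data, and threshold below are all hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 200 accounts, 16-dim embeddings, 3 annotated concepts.
emb = rng.normal(size=(200, 16))
concepts = {
    "high_utilization": rng.integers(0, 2, 200),
    "recent_delinquency": rng.integers(0, 2, 200),
    "thin_file": rng.integers(0, 2, 200),
}
# Plant a signal so dimension 0 tracks "high_utilization".
emb[:, 0] += 3.0 * concepts["high_utilization"]

def label_dimensions(emb, concepts, min_corr=0.3):
    """Label each embedding dimension with its best-correlated concept."""
    labels = {}
    for d in range(emb.shape[1]):
        best_name, best_corr = None, 0.0
        for name, y in concepts.items():
            r = abs(np.corrcoef(emb[:, d], y)[0, 1])
            if r > best_corr:
                best_name, best_corr = name, r
        if best_corr >= min_corr:
            labels[d] = (best_name, round(best_corr, 3))
    return labels

labels = label_dimensions(emb, concepts)
```

Real pipelines would replace the plain correlation with probes or sparse feature dictionaries, but the output contract is the same: a map from dimension index to a text description plus a confidence score.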

Benefits

  • Comprehensive compensation and healthcare packages
  • 401(k) matching
  • Paid time off
  • Organizational growth potential through our online learning platform with guided career tracks


What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Education Level: Ph.D. or professional degree
Number of Employees: 5,001-10,000 employees
