Applied Researcher I

Capital One · San Jose, CA
$218,700 - $272,300 · Onsite

About The Position

At Capital One, the Applied Researcher I will contribute to creating trustworthy and reliable AI systems, aiming to change banking for good. Capital One has been a leader in using machine learning to develop real-time, intelligent, and automated customer experiences, bringing humanity and simplicity to banking through AI & ML applications. The company is dedicated to building world-class applied science and engineering teams, continuously enhancing its capabilities with breakthrough product experiences and scalable, high-performance AI infrastructure. In this role, you will help leverage the transformative power of emerging AI capabilities to reimagine how customers and businesses interact with Capital One's products and services. The AI Foundations team, where this role is situated, is central to realizing Capital One's vision for AI, covering the entire research life cycle from academic partnerships to building production systems. The team collaborates with product, technology, and business leaders to apply state-of-the-art AI to the business.

Requirements

  • Currently has, or is in the process of obtaining, a PhD in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or related fields, with the expectation that the required degree will be obtained on or before the scheduled start date, or:
  • An M.S. in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or related fields, plus 2 years of experience in Applied Research
  • Innovative: continually research and evaluate emerging technologies, stay current on published state-of-the-art methods, technologies, and applications, and seek out opportunities to apply them.
  • Creative: thrive on bringing definition to big, undefined problems, love asking questions and pushing hard to find answers, not afraid to share a new idea.
  • A leader: challenge conventional thinking and work with stakeholders to identify and improve the status quo, passionate about talent development for your own team and beyond.
  • Technical: comfortable with open-source languages and passionate about developing your skills further, with hands-on experience developing AI foundation models and solutions using open-source tools and cloud computing platforms.
  • Has a deep understanding of the foundations of AI methodologies.
  • Experience building large deep learning models, whether on language, images, events, or graphs, as well as expertise in one or more of the following: training optimization, self-supervised learning, robustness, explainability, RLHF.
  • An engineering mindset as shown by a track record of delivering models at scale both in terms of training data and inference volumes.
  • Experience in delivering libraries, platform-level code, or solution-level code to existing products.
  • A professional with a track record of coming up with high-quality ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects.
  • Possess the ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects.

Nice To Haves

  • PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering or related fields
  • LLM: PhD focus on NLP, or a Masters with 5 years of industrial NLP research experience
  • Multiple publications on topics related to the pre-training of large language models (e.g. technical reports of pre-trained LLMs, SSL techniques, model pre-training optimization)
  • Member of a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens)
  • Publications in deep learning theory
  • Publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR
  • Optimization (Training & Inference): PhD focused on topics related to optimizing the training of very large deep learning models
  • Multiple years of experience and/or publications on one of the following topics: Model Sparsification, Quantization, Training Parallelism/Partitioning Design, Gradient Checkpointing, Model Compression
  • Experience optimizing training for a 10B+ model
  • Deep knowledge of deep learning algorithmic and/or optimizer design
  • Experience with compiler design
  • Finetuning: PhD focused on topics related to guiding LLMs toward downstream tasks (Supervised Finetuning, Instruction-Tuning, Dialogue-Finetuning, Parameter Tuning)
  • Demonstrated knowledge of principles of transfer learning, model adaptation and model guidance
  • Experience deploying a fine-tuned large language model

Responsibilities

  • Partner with a cross-functional team of data scientists, software engineers, machine learning engineers and product managers to deliver AI-powered products that change how customers interact with their money.
  • Leverage a broad stack of technologies — PyTorch, AWS UltraClusters, Hugging Face, Lightning, vector databases, and more — to reveal the insights hidden within huge volumes of numeric and textual data.
  • Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation.
  • Engage in high impact applied research to take the latest AI developments and push them into the next generation of customer experiences.
  • Flex your interpersonal skills to translate the complexity of your work into tangible business goals.


What This Job Offers

Job Type

Full-time

Career Level

Entry Level

Education Level

Ph.D. or professional degree

Number of Employees

5,001-10,000 employees
