Distinguished Data Engineer

Capital One · New York, NY

About The Position

Distinguished Data Engineers are individual contributors who strive for diversity of thought so the team can fully visualize the problem space. At Capital One, diversity of thought strengthens our ability to influence, collaborate, and deliver innovative solutions across organizational boundaries. Distinguished Engineers significantly impact the company's trajectory and devise clear roadmaps for delivering next-generation technology solutions, working alongside a talented team of developers, machine learning experts, product managers, and people leaders.

These engineers are leading experts in their domains. They devise practical, reusable solutions to complex problems and drive innovation at multiple levels, optimizing business outcomes while building strong technology solutions. They promote a culture of engineering excellence, balancing deep expertise with an inclusive environment where others' ideas can be heard and championed, and they lead in creating next-generation talent for Capital One Tech, mentoring internal talent and actively recruiting to build the community.

Distinguished Engineers are expected to lead through technical contribution, operating as trusted advisors for key technologies, platforms, and capability domains, and creating clear, concise communications, code samples, blog posts, and other material to share knowledge both inside and outside the organization. They specialize in a particular subject area, but their input and impact are sought and expected throughout the organization. As deep technical experts and thought leaders, they accelerate adoption of engineering best practices, stay current on industry innovations, trends, and practices, and collaborate on Capital One's toughest issues to deliver on business needs.

They act as role models and mentors, coaching and strengthening the technical expertise of the engineering and product community. They also serve as evangelists, both internally and externally, elevating the Distinguished Engineering community and establishing themselves as go-to resources on given technologies and technology-enabled capabilities.

Requirements

  • Bachelor’s Degree
  • At least 7 years of experience in data engineering
  • At least 3 years of experience in data architecture
  • At least 2 years of experience building applications in AWS

Nice To Haves

  • Master’s Degree
  • 9+ years of experience in data engineering
  • 3+ years of data modeling experience
  • 2+ years of experience with ontology standards for defining a domain
  • 2+ years of experience using Python, SQL or Scala
  • 1+ year of experience deploying machine learning models
  • 3+ years of experience implementing big data processing solutions on AWS
  • 10+ years of hands-on experience developing and architecting solutions on AWS
  • Deep proficiency and strategic experience with Databricks (e.g., Delta Lake, Unity Catalog, MLflow, performance tuning Spark workloads)
  • Deep proficiency and strategic experience with Snowflake (e.g., Data Sharing, Snowpipe, external tables, security features, cost governance, advanced SQL/stored procedures)
  • Deep practical knowledge of core AWS data services such as Amazon S3, EMR, Glue, Kinesis, and Lambda, and how they integrate with Databricks and Snowflake
  • Experience with Infrastructure as Code (IaC) tools (e.g., Terraform) to automate the deployment and management of Databricks and Snowflake environments on AWS
  • Relevant professional certifications such as AWS Certified Solutions Architect - Professional or Snowflake/Databricks certifications (e.g., SnowPro Advanced: Architect)
  • 10+ years of professional experience coding in commonly used languages such as Java, Python, Go, JavaScript/TypeScript, Swift, etc.
  • 8+ years of professional experience across the full lifecycle of system development, from conception through architecture, implementation, testing, deployment, and production support
  • Experience applying Artificial Intelligence or Machine Learning concepts to engineering challenges (e.g., anomaly detection, test optimization, intelligent testing)
  • Deep practical knowledge of Site Reliability Engineering (SRE) principles, chaos engineering, and advanced observability tooling (e.g., OpenTelemetry, Prometheus, distributed tracing)
  • Experience implementing Artificial Intelligence or AI-enabled solutions

Responsibilities

  • Build awareness, increase knowledge and drive adoption of modern technologies, sharing consumer and engineering benefits to gain buy-in.
  • Strike the right balance between lending expertise and providing an inclusive environment where others’ ideas can be heard and championed; leverage expertise to grow skills in the broader Capital One team
  • Promote a culture of engineering excellence, using opportunities to reuse and inner source solutions where possible
  • Effectively communicate with and influence key stakeholders across the enterprise, at all levels of the organization
  • Operate as a trusted advisor for a specific technology, platform or capability domain, helping to shape use cases and implementation in a unified manner
  • Lead the way in creating next-generation talent for Tech, mentoring internal talent and actively recruiting external talent to strengthen Capital One’s engineering community
  • Define and champion the strategic roadmap for the adoption, governance, and cost-optimization of the data ecosystem leveraging Databricks and Snowflake on the AWS cloud.
  • Lead the design and implementation of highly scalable, fault-tolerant, and cost-effective data architectures that seamlessly integrate Databricks (for complex processing/ML) and Snowflake (for warehousing/BI).
  • Serve as the primary technical authority for data security and governance best practices within Databricks and Snowflake, ensuring integration with core AWS security services (e.g., IAM, KMS).
  • Drive performance engineering and optimization for large-scale data ingestion and processing workloads across the Databricks/Snowflake/AWS data pipeline.
  • Mentor engineering teams on the advanced features, architecture, and cost-efficient usage of Databricks, Snowflake, and related AWS services.
  • Articulate and evangelize a bold technical vision for your domain
  • Decompose complex problems into practical and operational solutions
  • Ensure the quality of technical design and implementation
  • Serve as an authoritative expert on non-functional system characteristics, such as performance, scalability and operability
  • Continue learning and injecting advanced technical knowledge into our community
  • Handle several projects simultaneously, balancing your time to maximize impact
  • Act as a role model and mentor within the tech community, helping to coach and strengthen the technical expertise and know-how of our engineering and product community
  • Ensure code is of the highest quality and standard while serving as an active contributor and reviewer on critical repos of the application
  • Develop full stack applications with a product engineering mindset, spanning frontend and backend ecosystems that balance simplicity with flexibility

Benefits

  • Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being.
  • Performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI)