Lead Software Engineer - Big Data in Cupertino, CA

US Bank | Cupertino, CA
Posted 234 days ago | $193,419 - $218,400 | Remote

About The Position

U.S. Bank is seeking a Lead Software Engineer - Big Data in Cupertino, CA. This position is with Talech, Inc., a U.S. Bank company. The Lead Software Engineer - Big Data will: build and implement the data platform framework that enables reporting of sales and orders to customers of the Talech POS solution; build Apache Spark pipelines that write time-series data to Apache Druid for time-series reporting of customer sales and orders; collaborate with product managers and API teams to identify data quality issues and address them in the pipeline implementation; build a Spark bulk loading process to Druid for different ETLs, allowing customers to query and visualize sales/refund and tax-related time-series data for custom timeframes; and architect and design new solutions and frameworks for the data platform team as part of its tech modernization journey. The position may allow working from home within commuting distance of the worksite location. Multiple positions are available.
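
The posting stops at this level of detail, but the following minimal PySpark sketch illustrates the kind of pipeline it describes: a job that rolls raw order events up to an hourly grain and stages the result as partitioned Parquet on S3 for Druid's batch ingestion to pick up. This is an assumption-laden illustration, not Talech's implementation; the paths and column names (raw_orders_path, druid_staging_path, order_ts, merchant_id, amount, tax_amount) are hypothetical.

    # A minimal sketch, assuming order events land as JSON in S3 and that
    # Druid batch-ingests the Parquet output. All paths and column names
    # below are hypothetical illustrations, not Talech's actual schema.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-to-druid-staging").getOrCreate()

    raw_orders_path = "s3://example-bucket/raw/orders/"        # hypothetical
    druid_staging_path = "s3://example-bucket/druid/orders/"   # hypothetical

    orders = spark.read.json(raw_orders_path)

    # Roll raw order events up to an hourly grain, the granularity a Druid
    # datasource for sales/refunds reporting would typically query at.
    # "__time" follows Druid's conventional name for the time column.
    hourly = (
        orders
        .withColumn("__time", F.date_trunc("hour", F.col("order_ts")))
        .groupBy("__time", "merchant_id", "order_type")  # order_type: sale/refund
        .agg(
            F.sum("amount").alias("gross_amount"),
            F.sum("tax_amount").alias("tax_amount"),
            F.count("*").alias("order_count"),
        )
    )

    # Partition by event date so Druid's batch ingestion can map partitions
    # onto time-chunked segments efficiently.
    (
        hourly
        .withColumn("dt", F.to_date("__time"))
        .write.mode("overwrite")
        .partitionBy("dt")
        .parquet(druid_staging_path)
    )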

Requirements

  • Bachelor's degree in Computer Science or Software Engineering.
  • 5 years of experience as a Senior Data Engineer, Senior Software Engineer, Big Data Engineer, or in a related role.
  • Experience with designing, developing, testing, operating, and maintaining products.
  • Experience with full stack ownership and writing production-ready and testable code.
  • Experience with creating optimal designs adhering to architectural best practices.
  • Experience with scalability, reliability, and performance considerations in technical designs.
  • Experience with analyzing failures and proposing design changes.
  • Experience with making sound design/coding decisions focused on customer experience.
  • Experience with conducting code reviews and ensuring compliance with development procedures.
  • Experience with compliance and security best practices in product development.
  • Experience with software reliability engineering standards.
  • Experience with prioritizing and sizing tasks for incremental delivery.
  • Experience with anticipating and communicating blockers and delays.
  • Proficiency in Java, Python, Spark/PySpark, Kafka, AWS, Docker, Airflow, Apache Druid, EMR, S3, Kubernetes, BigQuery, and MySQL.

Responsibilities

  • Build and implement the data platform framework for reporting sales and orders.
  • Build Apache Spark pipelines to write time series data to Apache Druid.
  • Collaborate with product managers and API teams to identify and address data quality issues.
  • Build a Spark bulk loading process to Druid for different ETLs; a sketch of one possible orchestration follows this list.
  • Architect and design new solutions and frameworks for the data platform team.
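
Airflow appears in the posting's stack, so one plausible way to orchestrate the bulk load is a two-step DAG: run the Spark aggregation job, then submit a batch ingestion task to Druid's Overlord API (which does expose POST /druid/indexer/v1/task). Everything else below is a hedged illustration: the DAG id, file paths, Overlord host, and the heavily trimmed ingestion spec are all hypothetical, not the team's actual configuration.

    # A hedged orchestration sketch, not a production DAG. Assumes Airflow 2.4+
    # (the "schedule" keyword) and the apache-spark provider package.
    import json
    from datetime import datetime

    import requests
    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.providers.apache.spark.operators.spark_submit import (
        SparkSubmitOperator,
    )


    def submit_druid_ingestion(**context):
        # Druid accepts batch ingestion tasks via the Overlord's task API.
        # This spec is heavily trimmed: a real index_parallel spec also needs
        # dataSchema (dimensions, metrics, granularity) and tuningConfig.
        spec = {
            "type": "index_parallel",
            "spec": {
                "ioConfig": {
                    "type": "index_parallel",
                    "inputSource": {
                        "type": "s3",
                        "uris": ["s3://example-bucket/druid/orders/"],  # hypothetical
                    },
                    "inputFormat": {"type": "parquet"},
                },
            },
        }
        resp = requests.post(
            "http://druid-overlord:8081/druid/indexer/v1/task",  # hypothetical host
            data=json.dumps(spec),
            headers={"Content-Type": "application/json"},
            timeout=30,
        )
        resp.raise_for_status()


    with DAG(
        dag_id="orders_bulk_load_to_druid",  # hypothetical
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        aggregate = SparkSubmitOperator(
            task_id="aggregate_orders",
            application="jobs/orders_to_druid_staging.py",  # hypothetical path
        )
        ingest = PythonOperator(
            task_id="submit_druid_ingestion",
            python_callable=submit_druid_ingestion,
        )
        aggregate >> ingest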

Benefits

  • Healthcare (medical, dental, vision)
  • Basic term and optional term life insurance
  • Short-term and long-term disability
  • Pregnancy disability and parental leave
  • 401(k) and employer-funded retirement plan
  • Paid vacation (from two to five weeks depending on salary grade and tenure)
  • Up to 11 paid holiday opportunities
  • Adoption assistance
  • Sick and Safe Leave accruals of one hour for every 30 hours worked, up to 80 hours per calendar year

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Industry: Credit Intermediation and Related Activities
  • Education Level: Bachelor's degree
