Chime • Posted 2 months ago
$215,000 - $235,000/Yr
Full-time • Senior
Remote • San Francisco, CA
Credit Intermediation and Related Activities

We are seeking a Senior Software Engineer, Data Engineering at Chime's San Francisco, CA office. The base salary for this role ranges from $215,000 to $235,000. Salary is one part of Chime's competitive package, and offers are based on the candidate's experience and geographic location. The responsibilities of the role are listed below. You will leverage hands-on experience with a data warehouse such as Snowflake, AWS Redshift, BigQuery, or Teradata; expertise in a commonly used data programming language (for example, Python, Java, or SQL); experience with Airflow, Terraform, and a cloud provider stack (AWS, GCP, or Azure); and knowledge of data streaming technologies such as Spark (AWS Glue), Flink, Storm, Kinesis, and/or Kafka. Some telecommuting is permitted.

  • Design strategies for enterprise databases, data warehouse systems, and multidimensional networks.
  • Set standards for database operations, programming, query processes, and security.
  • Model, design, and construct large relational databases or data warehouses.
  • Create and optimize data models for warehouse infrastructure and workflow.
  • Build a scalable data platform and pipelines that cater to a variety of domains across Chime.
  • Build scalable data computation systems used by all Chimers.
  • Architect and build workflows that could potentially become de facto standards for the fintech industry.
  • Be a hands-on data engineer, building, scaling, and optimizing self-serve ETL frameworks that can handle batch processing and/or streaming.
  • Own the ETL workflows and make sure the pipeline meets data quality and availability requirements.
  • Work closely with other data engineering teams to integrate schema registry and establish data lineage for all data domains.
  • Work closely with our stakeholder teams, such as Data Science, Product Engineering, and Analytics, to help them with their data computation needs.
  • Share ownership of all aspects of data: data quality, data governance, data and schema design, and security.
  • Mentor and lead more junior engineers and help them improve their craft.
  • Build and deploy production-quality data pipelines.
  • Build solutions to provide visibility to partner teams using solid understanding of key metrics for data pipelines.
  • Master's degree in Computer/Data Science, Data Engineering, or a related field, and 4 years of experience in the job offered or in a data engineering-related occupation.
  • At least 4 years of experience in data architecture including understanding the nuances involved in various database systems.
  • At least 4 years of experience in designing data systems including understanding of business objectives, tools, and data warehousing concepts.
  • At least 4 years of experience in handling big data using Spark and Dataflow.
  • At least 4 years of experience with Databricks including notebooks, feature store, and cluster usage.
  • At least 2 years of experience writing SQL, including complex aggregations and sampling.
  • At least 2 years of experience in developing frameworks using Python and Java.
  • At least 2 years of experience in cloud systems including AWS, Azure, and Google Cloud.
  • Competitive salary based on experience.
  • 401k match plus great medical, dental, vision, life, and disability benefits.
  • Generous vacation policy, plus company-wide Chime Days (bonus company-wide paid days off).
  • 1% of your time off to support local community organizations of your choice.
  • Annual wellness stipend to use toward eligible wellness-related expenses.
  • Up to 24 weeks of paid parental leave for birthing parents and 12 weeks of paid parental leave for non-birthing parents.
  • Access to Maven, a family planning tool, with $15k lifetime reimbursement for egg freezing, fertility treatments, adoption, and more.
  • In-person and virtual events to connect with your fellow Chimers.
© 2024 Teal Labs, Inc