We are seeking a Senior Software Engineer, Data Engineering at Chime's San Francisco, CA office. The base salary offered for this role ranges from $215,000 to $235,000. Salary is one part of Chime's competitive package; offers are based on the candidate's experience and geographic location.

In this role, you can expect to:

- Design strategies for enterprise databases, data warehouse systems, and multidimensional networks.
- Set standards for database operations, programming, query processes, and security.
- Model, design, and construct large relational databases and data warehouses.
- Create and optimize data models for warehouse infrastructure and workflows.
- Build a scalable data platform and pipelines that cater to a variety of domains across Chime.
- Build scalable data computation systems used by all Chimers.
- Architect and build workflows that could become de facto standards for the fintech industry.
- Be a hands-on data engineer, building, scaling, and optimizing self-serve ETL frameworks that handle batch processing and/or streaming.
- Own the ETL workflows and ensure the pipelines meet data quality and availability requirements.
- Work closely with other data engineering teams to integrate a schema registry and establish data lineage for all data domains.
- Work closely with stakeholder teams such as Data Science, Product Engineering, and Analytics to support their data computation needs.
- Jointly own all aspects of data: quality, governance, data and schema design, and security.
- Mentor junior engineers and help them improve their craft.
- Build and deploy production-quality data pipelines.
- Build solutions that give partner teams visibility, drawing on a solid understanding of key metrics for data pipelines.
- Leverage hands-on experience with data warehouses such as Snowflake, AWS Redshift, BigQuery, or Teradata.
- Use expertise with a commonly used data programming language (for example, Python, Java, or SQL).
- Leverage experience with Airflow, Terraform, and a cloud provider stack (AWS, GCP, or Azure).
- Utilize knowledge of data streaming technologies such as Spark (AWS Glue), Flink, Storm, Kinesis, and/or Kafka.

Some telecommuting is permitted.