Senior Data Engineer, Customer Operations

Block
San Francisco, CA

About The Position

Since we opened our doors in 2009, the world of commerce has evolved immensely, and so has Square. After enabling anyone to take payments and never miss a sale, we saw sellers stymied by disparate, outmoded products and tools that wouldn’t work together. So we expanded into software and started building integrated, omnichannel solutions – to help sellers sell online, manage inventory, offer buy now, pay later functionality, book appointments, engage loyal buyers, and hire and pay staff. Across it all, we’ve embedded financial services tools at the point of sale, so merchants can access a business loan and manage their cash flow in one place. Afterpay furthers our goal to provide omnichannel tools that unlock meaningful value and growth, enabling sellers to capture the next generation shopper, increase order sizes, and compete at a larger scale. Today, we are a partner to sellers of all sizes – large, enterprise-scale businesses with complex operations, sellers just starting out, and merchants who began selling with Square and have grown larger over time. As our sellers grow, so do our solutions. There is a massive opportunity in front of us. We’re building a significant, meaningful, and lasting business, and we are helping sellers worldwide do the same.

The Role

The Customer Operations (Cust Ops) team at Block supports the handling of customer support and complaints cases across Block’s brands (Square, Cash App, Afterpay, Tidal, Proto). We work globally with partners in business, engineering, counsel, data science, ML, and product to provide world-class support experiences while ensuring consumer protection and minimizing, and ultimately eliminating, bad activity on our platform. You will report to a Customer Ops Data Engineering Manager. As a Data Engineer, you will handle everything from data architecture and modeling to data pipeline tooling and dashboarding.
Our full-stack Data Engineers develop comprehensive reporting solutions, proactively innovating and adaptively responding to business needs. This involves everything from scoping and refining requirements to constructing robust data infrastructure and developing self-service reports and tools that address regulatory data requirements. You will enable other Cust Ops teams to make impactful business decisions by laying the foundation for our large and unique datasets that span multiple products. Typical data you will work with includes case details, agent performance (effectiveness and efficiency), agent details and staffing data, contact center health metrics across Phone, Messaging, Email, and Social channels, training data, CS financial data, customer info and retention data, and resolution data.

Requirements

  • A minimum of 8 years of related experience with a Bachelor’s degree; or 6 years and a Master’s degree; or equivalent experience.
  • High proficiency in SQL
  • Experience designing medium-to-large data engineering solutions and owning the entire project lifecycle, including scoping, design, development, testing, deployment, and documentation
  • Experience with ETL scheduling technologies with dependency checking, such as Airflow or Prefect, as well as schema design and dimensional data modeling
  • Experience with setting up data quality and data lineage monitoring
  • Experience with Python and Terraform
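For illustration, the "dependency checking" the requirements mention is the core idea behind schedulers like Airflow and Prefect: each task declares its upstreams, and the scheduler only runs a task once everything it depends on has finished. A minimal sketch using only the Python standard library (the task names are hypothetical, loosely modeled on the pipelines described here):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ETL tasks mapped to their upstream dependencies.
# In Airflow the same ordering would be declared with operators and
# `extract >> transform >> load`; here we show the underlying mechanism.
deps = {
    "extract_cases": set(),
    "extract_agents": set(),
    "transform_metrics": {"extract_cases", "extract_agents"},
    "load_dashboard": {"transform_metrics"},
}

def run_order(dependencies):
    """Return the tasks in an order that satisfies every dependency."""
    return list(TopologicalSorter(dependencies).static_order())

order = run_order(deps)
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which is the same class of validation a production scheduler performs before executing a DAG.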

Nice To Haves

  • Experience with financial crimes compliance systems, technologies, and processes is a big plus
  • Experience driving data-driven decisions for AI initiatives / agent building is a big plus
  • Experience with compliance and/or regulatory data is a plus
  • Experience with creating data visualizations (Tableau, Looker, PowerBI, Mode, Qlik, etc.)
  • A strategic mindset that drives technical excellence while building consensus with multiple stakeholders and partners

Responsibilities

  • Operate across a wide range of cross-functional teams, understanding their needs in order to develop, deploy, maintain, and optimize data models, pipelines, ETL jobs, and visualizations
  • Create new data models and schemas, and optimize existing ones, on top of Block data, including but not limited to eventing, customer-level, and process-level data
  • Build monitoring to assess the health of the team’s infrastructure as well as data quality and lineage
  • Build on and promote an effective data strategy, including developing processes, policy, and infrastructure to help standardize business and product metric definitions
  • Troubleshoot technical issues with platforms, performance, data discrepancies, and alerts
  • Participate in on-call rotation, monitor daily execution, diagnose and log issues, and fix business critical pipelines to ensure SLAs are met with internal stakeholders
  • Perform ad hoc data extractions to resolve critical business and infrastructure issues
  • Model data in Looker or similar visualization tools to empower data access and self-service, so your expertise can be leveraged where it is most impactful
  • Teach and encourage others to self-serve by building tools that make it simpler and faster for them to do so
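As a sketch of the data quality monitoring these responsibilities describe, one common check flags a batch whose null rate on a required column exceeds a threshold. The field names (e.g. "case_id") and the 5% threshold below are illustrative assumptions, not part of the role description:

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for row in rows if row.get(column) is None)
    return missing / len(rows)

def quality_alert(rows, column, threshold=0.05):
    """Return True when the null rate on `column` breaches the threshold."""
    return null_rate(rows, column) > threshold

# A tiny hypothetical batch of case records, one missing its ID.
batch = [
    {"case_id": "c1", "channel": "Phone"},
    {"case_id": None, "channel": "Email"},
    {"case_id": "c3", "channel": "Messaging"},
]
```

In practice a check like this would run as a task in the scheduled pipeline and page the on-call rotation when it fires, rather than being evaluated inline.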