Quality Analytics Lead

Welocalize
Portland, OR

About The Position

The Quality Analytics Lead is the dedicated technical resource bridging Welo Data’s Analytics and Quality organizations. Sitting within the Analytics team, this senior individual contributor partners enterprise-wide with Quality Managers, Analysts, and leadership to design and maintain the data models, measurement frameworks, and analytical infrastructure that power evidence-based quality decisions across programs and regions.

At its core, this is an analytics engineering role. The primary responsibility is building and owning the quality data layer — the dbt models, data marts, and Python-driven modeling that transform raw operational data into a trusted, well-documented foundation the Quality organization can rely on. Experimentation, stakeholder consulting, and BI delivery are all extensions of that foundation, not parallel tracks.

The ideal candidate combines deep fluency in modern data modeling with a genuine understanding of quality operations, AI training data workflows, and experimental design. They ensure the analytical systems they build directly improve how quality teams detect issues, validate improvements, and demonstrate impact to clients and leadership.

Requirements

  • Bachelor’s degree or equivalent work experience in Computer Science, Data Science, Statistics, Engineering, or a related quantitative field.
  • 5+ years in a data analytics, analytics engineering, or data modeling role with demonstrated ownership of analytical data products in a production environment.
  • Proven experience designing and building dbt models, including mart architecture, testing, documentation, and version-controlled development workflows.
  • Strong Python proficiency for data analysis and modeling (e.g., pandas, numpy, statsmodels, or equivalent).
  • Advanced SQL skills for complex analytical queries, data exploration, and data quality validation.
  • Hands-on experience designing and analyzing controlled experiments or A/B tests, including statistical significance testing, power analysis, and practical results interpretation (a brief illustrative sketch follows this list).
  • Demonstrated ability to translate business requirements from non-technical operational stakeholders into well-scoped analytical solutions.
  • Technical rigor with operational empathy: the ability to deeply understand quality teams’ day-to-day challenges and translate them into well-designed, purposeful analytical solutions — not over-engineered abstractions.
  • Strong analytical and statistical reasoning, including applied experience with experimental design, performance attribution, and hypothesis testing in messy, real-world operational data.
  • Exceptional communication: able to translate complex data models and analytical findings into plain-language insights for quality managers, senior leadership, and clients across a diverse range of technical literacy levels.
  • Self-directed and proactive: comfortable managing a diverse project backlog with competing priorities, delivering consistently without close supervision, and raising blockers early and clearly.
  • Collaborative and intellectually curious: genuinely interested in understanding quality processes and domain context deeply enough to ask the right questions before building.
  • Growth orientation: excited about building a new function from the ground up, and committed to documenting, scaling, and sharing work in a way that creates lasting organizational value.
  • Core tooling: dbt (models, marts, tests, documentation); Python (data analysis and modeling); advanced SQL; Git/version control.
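
As a concrete illustration of the experimentation requirement above (and of the experiment-design responsibilities listed later), here is a minimal sketch of a power analysis and significance test using statsmodels. The metric, rates, and counts are hypothetical, chosen only to make the example runnable; they do not describe Welocalize data.

```python
# Hypothetical A/B test of a quality intervention, measured on per-task
# defect rate. All numbers below are illustrative, not real program data.
import numpy as np
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest

baseline_rate = 0.08   # assumed current defect rate
target_rate = 0.06     # smallest improvement worth detecting

# Power analysis: audited tasks needed per arm for 80% power at alpha = 0.05.
effect = proportion_effectsize(baseline_rate, target_rate)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"~{int(np.ceil(n_per_arm))} audited tasks per arm")

# Significance test on (made-up) observed outcomes after the test runs.
defects = np.array([210, 162])   # defect counts: control, treatment
tasks = np.array([2600, 2600])   # audited tasks per arm
z_stat, p_value = proportions_ztest(defects, tasks)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```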

Nice To Haves

  • Post-graduate education or equivalent professional experience in analytics, data modeling, or data engineering.
  • Exposure to quality operations, AI training data workflows, annotation platforms, or BPO/localization environments.
  • Familiarity with QA frameworks, sampling methodology, CAPA processes, rubric design, or quality management systems in a data-intensive context.
  • Experience working in an embedded analytics role supporting an operational team, with accountability for both analytical outputs and the underlying data infrastructure.
  • Proficiency with BI tools — Power BI preferred — for delivering analytical outputs to non-technical stakeholders.
  • Familiarity with ELT/pipeline tooling (e.g., Matillion, Fivetran, or equivalent) and how data flows from operational systems into analytics-ready layers.
  • Additional tooling: Power BI or an equivalent BI platform; ELT pipeline tooling; statistical modeling libraries (Python); data warehouse environments (e.g., Snowflake, BigQuery, or similar).

Responsibilities

  • Quality Data Modeling & Analytics Infrastructure: Design, build, and maintain dbt models and data marts that serve the Quality organization’s enterprise reporting needs — covering throughput, accuracy, defect rates, CAPA effectiveness, annotator/rater performance, and program-level quality health (a simplified sketch of such a mart follows this list).
  • Use Python for higher-order data modeling tasks including cohort analysis, performance trend modeling, and custom aggregations that go beyond standard SQL/dbt scope (see the cohort sketch after this list).
  • Partner with data engineers to define source data requirements, document data lineage, and ensure quality data is reliable, consistent, and analytics-ready.
  • Own the quality analytics data layer end-to-end: from raw operational inputs to clean, tested, well-documented marts consumed by dashboards, reports, and ad hoc analyses.
  • Apply dbt testing, documentation, and best practices to build a trusted, maintainable codebase that scales as new programs and data sources are onboarded.
  • Collaborate with Quality Managers and Analysts to define, standardize, and operationalize quality metrics — including accuracy rates, defect categorization, sampling coverage, inter-rater agreement, and CAPA closure effectiveness — consistently across all programs (an agreement-measurement sketch follows this list).
  • Design measurement frameworks aligned to acceptance criteria and quality thresholds, ensuring metrics faithfully reflect program health and client commitments.
  • Support rubric and guideline effectiveness measurement, helping quality teams understand whether their standards produce consistent, measurable outcomes across annotators and raters.
  • Champion data quality governance within the Quality org: own metric definitions, threshold documentation, and analytical methodology standards to reduce inconsistency and reporting variance.
  • Define enterprise-level quality dashboards in partnership with BI resources, translating mart output into clear, decision-ready views for Quality Managers through to senior leadership.
  • Analyze patterns in model evaluation outcomes, annotator disagreement, and guideline interpretation to surface systemic issues in AI training data and evaluation processes.
  • Design and execute A/B tests and controlled experiments to measure the impact of quality interventions, process changes, and annotator training programs — applying proper power analysis, significance testing, and results interpretation.
  • Build success validation frameworks to confirm that CAPA actions and process improvements produce measurable, sustained outcomes — not just short-term fluctuations.
  • Develop performance attribution models that quantify the contribution of specific quality initiatives to outcome improvements, separating causal signal from noise in program performance trends.
  • Apply statistical methods to sampling design, audit analysis, and error pattern detection, surfacing systemic quality issues and their root causes with data-backed evidence.
  • Conduct pre/post analyses for major quality program changes, training rollouts, and rubric updates, delivering clear impact assessments to quality leadership and clients.
  • Act as the analytical partner to Quality Managers (P2–L2) and senior quality leadership, translating complex data models and analytical findings into clear, actionable insights for program decisions.
  • Produce client-ready analytical deliverables — including quality performance summaries, trend analyses, and post-mortem reports — that Quality Managers can present in client governance reviews and executive forums.
  • Proactively monitor quality performance data to identify emerging risks and flag issues to quality leadership before they escalate into client-impacting problems.
  • Lead discovery conversations with quality stakeholders to understand their data needs, translate them into well-scoped analytical requirements, and ensure delivered solutions address the actual decision being made.
  • Coach quality team members on data-driven decision making — helping them frame analytical questions, interpret results, and design measurement into their processes from the start.
  • Maintain and prioritize a backlog of analytics projects in support of the Quality organization’s evolving needs, balancing quick-turn analyses with longer-term data infrastructure investments.
  • Identify and implement opportunities to automate recurring quality reporting and analysis, reducing manual effort for quality teams and improving consistency and timeliness.
  • Maintain and update a backlog/roadmap spanning multiple workstreams, regularly communicating progress, blockers, and trade-offs to Analytics and Quality leadership.
  • Stay current on emerging best practices in quality analytics, experimental design, and AI evaluation methodology, recommending new approaches where they would meaningfully improve outcomes.
  • As this function matures, lay the groundwork for a dedicated Quality Analytics capability: document processes, build reusable frameworks, and onboard any future team members.
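
To make the data-modeling responsibility concrete: in practice the quality mart would be a dbt SQL model with schema tests, but as a language-neutral sketch, the same aggregation-and-test logic looks roughly like this in pandas. Column and function names are hypothetical.

```python
# Sketch of the roll-up and data tests a program-level quality mart would
# encode; in dbt this would be a SQL model plus schema tests (unique,
# not_null, accepted ranges). Column names are hypothetical.
import pandas as pd

def build_program_quality_mart(audits: pd.DataFrame) -> pd.DataFrame:
    """Roll per-task audit records up to program/week grain.

    Expects columns: program_id, week, task_id, is_defect (0/1).
    """
    mart = (
        audits.groupby(["program_id", "week"])
        .agg(tasks_audited=("task_id", "nunique"),
             defects=("is_defect", "sum"))
        .reset_index()
    )
    mart["defect_rate"] = mart["defects"] / mart["tasks_audited"]

    # dbt-style tests, expressed as assertions: unique grain, valid range.
    assert not mart.duplicated(["program_id", "week"]).any(), "duplicate grain"
    assert mart["defect_rate"].between(0, 1).all(), "defect_rate out of [0, 1]"
    return mart
```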
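
The Python modeling bullet above points at cohort-style analyses; as one hedged example of what "beyond standard SQL/dbt scope" can mean, here is a toy cohort curve of annotator accuracy by onboarding cohort and tenure. The frame and its columns are invented for illustration.

```python
# Toy cohort analysis: mean annotator accuracy by onboarding cohort and
# weeks of tenure. Data and column names are invented for illustration.
import pandas as pd

reviews = pd.DataFrame({
    "annotator_id": [1, 1, 2, 2, 3, 3],
    "cohort":       ["2024-01"] * 4 + ["2024-02"] * 2,  # onboarding month
    "tenure_weeks": [0, 4, 0, 4, 0, 4],
    "accuracy":     [0.82, 0.90, 0.78, 0.88, 0.85, 0.91],
})

# Rows = cohort, columns = tenure step, cells = mean accuracy.
cohort_curve = reviews.pivot_table(
    index="cohort", columns="tenure_weeks", values="accuracy", aggfunc="mean"
)
print(cohort_curve)

# How much each cohort improved between week 0 and week 4 of tenure.
print(cohort_curve[4] - cohort_curve[0])
```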
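
For the inter-rater agreement metric named above, a minimal sketch of Cohen's kappa (chance-corrected agreement between two raters), computed from scratch on toy labels; real inputs would come from the annotation platform.

```python
# Cohen's kappa: (p_observed - p_expected) / (1 - p_expected).
# Toy labels only; real inputs would come from the annotation platform.
import pandas as pd

def cohens_kappa(rater_a: pd.Series, rater_b: pd.Series) -> float:
    confusion = pd.crosstab(rater_a, rater_b)
    # Make the table square so both raters' marginals align by label.
    labels = confusion.index.union(confusion.columns)
    confusion = confusion.reindex(index=labels, columns=labels, fill_value=0)
    n = confusion.values.sum()
    p_observed = confusion.values.diagonal().sum() / n
    p_expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

a = pd.Series(["pass", "pass", "fail", "pass", "fail", "pass"])
b = pd.Series(["pass", "fail", "fail", "pass", "fail", "pass"])
print(f"kappa = {cohens_kappa(a, b):.2f}")   # 0.67 on this toy sample
```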