About The Position

Welo Data, a Welo Global brand, is the multilingual data and evaluation partner for foundation labs and enterprises deploying GenAI systems globally. It delivers the human judgment, data infrastructure, and evaluation systems that ensure AI models perform reliably across languages, cultures, and real-world contexts at every stage from training through deployment. Its global network of 500,000+ vetted experts spans 300+ languages and locales, enabling high-quality multilingual data creation and structured model evaluation across the full spectrum of modern AI applications, from large language models and voice and speech systems to agentic workflows, robotics, and embodied AI. This breadth of linguistic, cultural, and domain expertise enables Welo Data to address critical AI development challenges, including safety, bias, inclusivity, and cross-lingual reliability. A unified global operating model, led by specialized program and quality experts and grounded in assessment-driven talent selection, localized rubrics, and continuous calibration, ensures consistent performance across languages, domains, and modalities.

Underpinning all of this is NIMO™ (Network Identity Management and Operations), Welo Data’s proprietary identity and fraud-prevention framework. Built to maintain data integrity and workforce trust across a global contributor base, NIMO combines advanced verification, continuous monitoring, and structured QA to ensure every dataset is accurate, traceable, and culturally grounded. Learn more at welodata.ai.

The Quality Analytics Lead is the dedicated technical resource bridging Welo Data’s Analytics and Quality organizations. Sitting within the Analytics team, this senior individual contributor (IC) partners enterprise-wide with Quality Managers, Analysts, and leadership to design and maintain the data models, measurement frameworks, and analytical infrastructure that power evidence-based quality decisions across programs and regions.

At its core, this is an analytics engineering role. The primary responsibility is building and owning the quality data layer: the dbt models, data marts, and Python-driven modeling that transform raw operational data into a trusted, well-documented foundation the Quality organization can rely on. Experimentation, stakeholder consulting, and BI delivery are all extensions of that foundation, not parallel tracks.

The ideal candidate combines deep fluency in modern data modeling with a genuine understanding of quality operations, AI training data workflows, and experimental design. They ensure that the analytical systems they build directly improve how quality teams detect issues, validate improvements, and demonstrate impact to clients and leadership. As Welo Data’s quality analytics capability matures, this role is positioned to grow into the foundation of a dedicated Quality Analytics function, making it a compelling opportunity for someone who wants to build something meaningful from the ground up.

Requirements

  • 5+ years in a data analytics, analytics engineering, or data modeling role with demonstrated ownership of analytical data products in a production environment.
  • Proven experience designing and building dbt models, including mart architecture, testing, documentation, and version-controlled development workflows.
  • Strong Python proficiency for data analysis and modeling (e.g., pandas, numpy, statsmodels, or equivalent); a short illustrative sketch follows this list.
  • Advanced SQL proficiency.
  • Experience with Git and version-controlled development workflows.
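
As a minimal illustration of the pandas-style analysis named above (a sketch only; the table shape and column names are hypothetical, not Welo Data’s actual schema):

```python
# Sketch: weekly defect-rate rollup per program from audit-level records.
# The DataFrame below stands in for a hypothetical audits table.
import pandas as pd

audits = pd.DataFrame({
    "program": ["A", "A", "A", "B", "B", "B"],
    "audit_week": ["2025-W01", "2025-W01", "2025-W02",
                   "2025-W01", "2025-W02", "2025-W02"],
    "items_reviewed": [200, 200, 380, 250, 130, 130],
    "defects_found": [7, 5, 9, 20, 6, 7],
})

# Aggregate audits to one row per program-week, then derive the rate.
weekly = audits.groupby(["program", "audit_week"], as_index=False).sum(numeric_only=True)
weekly["defect_rate"] = weekly["defects_found"] / weekly["items_reviewed"]
print(weekly)
```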

Nice To Haves

  • Exposure to quality operations, AI training data workflows, annotation platforms, or BPO/localization environments.
  • Familiarity with QA frameworks, sampling methodology, CAPA processes, rubric design, or quality management systems in a data-intensive context.
  • Experience working in an embedded analytics role supporting an operational team, with accountability for both analytical outputs and the underlying data infrastructure.
  • Proficiency with BI tools — Power BI preferred — for delivering analytical outputs to non-technical stakeholders.
  • Familiarity with ELT/pipeline tooling (e.g., Matillion, Fivetran, or equivalent) and how data flows from operational systems into analytics-ready layers.
  • Familiarity with data warehouse environments (e.g., Snowflake, BigQuery, or similar).

Responsibilities

  • Design, build, and maintain dbt models and data marts that serve the Quality organization’s enterprise reporting needs — covering throughput, accuracy, defect rates, CAPA effectiveness, annotator/rater performance, and program-level quality health.
  • Use Python for higher-order data modeling tasks including cohort analysis, performance trend modeling, and custom aggregations that go beyond standard SQL/dbt scope.
  • Partner with data engineers to define source data requirements, document data lineage, and ensure quality data is reliable, consistent, and analytics-ready.
  • Own the quality analytics data layer end-to-end: from raw operational inputs to clean, tested, well-documented marts consumed by dashboards, reports, and ad hoc analyses.
  • Apply dbt testing, documentation, and best practices to build a trusted, maintainable codebase that scales as new programs and data sources are onboarded.
  • Collaborate with Quality Managers and Analysts to define, standardize, and operationalize quality metrics consistently across all programs, including accuracy rates, defect categorization, sampling coverage, inter-rater agreement (see the agreement sketch after this list), and CAPA closure effectiveness.
  • Design measurement frameworks aligned to acceptance criteria and quality thresholds, ensuring metrics faithfully reflect program health and client commitments.
  • Support rubric and guideline effectiveness measurement, helping quality teams understand whether their standards produce consistent, measurable outcomes across annotators and raters.
  • Champion data quality governance within the Quality org: own metric definitions, threshold documentation, and analytical methodology standards to reduce inconsistency and reporting variance.
  • Define enterprise-level quality dashboards in partnership with BI resources, translating mart output into clear, decision-ready views for Quality Managers through to senior leadership.
  • Design and execute A/B tests and controlled experiments to measure the impact of quality interventions, process changes, and annotator training programs, applying proper power analysis, significance testing, and results interpretation (a worked sketch follows this list).
  • Build success validation frameworks to confirm that CAPA actions and process improvements produce measurable, sustained outcomes — not just short-term fluctuations.
  • Develop performance attribution models that quantify the contribution of specific quality initiatives to outcome improvements, separating causal signal from noise in program performance trends.
  • Apply statistical methods to sampling design (see the sample-size sketch after this list), audit analysis, and error pattern detection, surfacing systemic quality issues and their root causes with data-backed evidence.
  • Conduct pre/post analyses for major quality program changes, training rollouts, and rubric updates, delivering clear impact assessments to quality leadership and clients (a minimal pre/post sketch follows this list).
  • Act as the analytical partner to Quality Managers and senior quality leadership, translating complex data models and analytical findings into clear, actionable insights for program decisions.
  • Produce client-ready analytical deliverables — including quality performance summaries, trend analyses, and post-mortem reports — that Quality Managers can present in client governance reviews and executive forums.
  • Proactively monitor quality performance data to identify emerging risks and flag issues to quality leadership before they escalate into client-impacting problems.
  • Lead discovery conversations with quality stakeholders to understand their data needs, translate them into well-scoped analytical requirements, and ensure delivered solutions address the actual decision being made.
  • Coach quality team members on data-driven decision making — helping them frame analytical questions, interpret results, and design measurement into their processes from the start.
  • Maintain and prioritize a backlog of analytics projects in support of the Quality organization’s evolving needs, balancing quick-turn analyses with longer-term data infrastructure investments.
  • Identify and implement opportunities to automate recurring quality reporting and analysis, reducing manual effort for quality teams and improving consistency and timeliness.
  • Manage that backlog as a roadmap spanning multiple workstreams, regularly communicating progress, blockers, and trade-offs to Analytics and Quality leadership.
  • Stay current on emerging best practices in quality analytics, experimental design, and AI evaluation methodology, recommending new approaches where they would meaningfully improve outcomes.
  • As this function matures, lay the groundwork for a dedicated Quality Analytics capability: document processes, build reusable frameworks, and onboard any future team members.
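
A minimal sketch of one metric named above, inter-rater agreement via two-rater Cohen’s kappa (the labels below are illustrative, not real program data):

```python
# Two-rater Cohen's kappa: observed agreement corrected for the agreement
# expected by chance from each rater's marginal label rates.
import numpy as np

rater_a = np.array(["pass", "fail", "pass", "pass", "fail", "pass"])
rater_b = np.array(["pass", "fail", "fail", "pass", "fail", "pass"])

labels = np.unique(np.concatenate([rater_a, rater_b]))
p_observed = np.mean(rater_a == rater_b)
p_expected = sum(np.mean(rater_a == lab) * np.mean(rater_b == lab) for lab in labels)
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"observed={p_observed:.2f} expected={p_expected:.2f} kappa={kappa:.2f}")
```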
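For the experimentation bullet above, a hedged sketch of sizing and analyzing a quality intervention as a two-proportion test with statsmodels (the pass rates and counts are illustrative assumptions):

```python
from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest
from statsmodels.stats.power import NormalIndPower

# 1) Power analysis: items per arm needed to detect a 90% -> 93% pass-rate
#    lift at alpha = 0.05 with 80% power.
effect = proportion_effectsize(0.93, 0.90)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"required items per arm: {n_per_arm:.0f}")

# 2) Significance test once the experiment has run (illustrative counts).
passes = [1341, 1392]    # control, treatment
audited = [1500, 1500]
z_stat, p_value = proportions_ztest(passes, audited)
print(f"z={z_stat:.2f} p={p_value:.4f}")
```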
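For the sampling-design bullet above, a small sketch of audit sample sizing under the standard normal approximation (the expected defect rate and margin are assumptions):

```python
import math

def audit_sample_size(expected_rate: float, margin: float, z: float = 1.96) -> int:
    """Items to audit so a ~95% CI on the defect rate is within +/- margin."""
    return math.ceil(z**2 * expected_rate * (1 - expected_rate) / margin**2)

# Expecting ~5% defects and wanting +/- 2 percentage points:
print(audit_sample_size(0.05, 0.02))  # 457
```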
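And for the pre/post bullet, a minimal paired comparison of per-annotator accuracy before and after a training rollout (the accuracy figures are illustrative):

```python
# Paired t-test on per-annotator accuracy: each annotator is their own control.
import numpy as np
from scipy.stats import ttest_rel

pre  = np.array([0.88, 0.91, 0.84, 0.90, 0.86, 0.89, 0.87, 0.92])
post = np.array([0.90, 0.93, 0.88, 0.91, 0.89, 0.90, 0.89, 0.94])

t_stat, p_value = ttest_rel(post, pre)
print(f"mean lift={np.mean(post - pre):.3f} t={t_stat:.2f} p={p_value:.4f}")
```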