Staff Data Science Engineer

Qorvo | Richardson, TX

About The Position

Qorvo (Nasdaq: QRVO) supplies innovative semiconductor solutions that make a better world possible. We combine product and technology leadership, systems-level expertise and global manufacturing scale to quickly solve our customers' most complex technical challenges. Qorvo serves multiple high-growth segments of large global markets, including consumer electronics, smart home/IoT, automotive, EVs, battery-powered appliances, network infrastructure, healthcare and aerospace/defense. Visit www.qorvo.com to learn how our innovative team is helping connect, protect and power our planet.

We are looking for a Staff Data Science Engineer to lead the design and delivery of scalable data science, machine learning, and analytics solutions that create measurable business value across the enterprise. This role sits at the intersection of data science, data engineering, analytics engineering, and AI productization. The right person will pair strong technical depth with practical business judgment, helping turn complex data into decisions, tools, and systems that improve operations, reduce cost, accelerate insight, and scale AI adoption.

This is a senior individual contributor role for someone who can operate as a technical leader across functions, influence stakeholders from engineers to executives, and build robust solutions in environments where data quality, governance, speed, and return on investment all matter.

In the current integration environment, this role must also work effectively within approved collaboration and information-sharing processes, including formal handling of cross-company meetings, data requests, documentation, and CSI-sensitive workflows described in the Project Comet guidance.

Requirements

  • Bachelor’s degree in Data Science, Computer Science, Statistics, Engineering, Applied Mathematics, or a related technical field.
  • 8+ years of experience in data science, machine learning, analytics engineering, or data platform development, including experience delivering business-facing solutions in production.
  • Strong programming skills in Python and SQL.
  • Deep experience with statistical analysis, machine learning, feature engineering, model evaluation, and experimental design.
  • Strong experience building data pipelines and working with modern data platforms and cloud analytics ecosystems.
  • Demonstrated ability to own ambiguous, high-impact problems and drive them through to adoption.
  • Experience partnering with senior stakeholders and influencing decisions across technical and non-technical groups.
  • Strong written and verbal communication skills, including the ability to explain complex concepts clearly to executives and business partners.
  • Proven ability to mentor others and lead technically without direct authority.
  • Machine learning, statistics, optimization, experimentation
  • Data modeling, ETL/ELT, analytics engineering
  • Cloud and distributed data platforms
  • BI and visualization tools
  • Git-based development workflows and production-quality software practices
  • MLOps and model lifecycle management
  • Data governance, documentation, and reproducibility

Nice To Haves

  • Advanced degree in a quantitative or technical field.
  • Experience in semiconductor, manufacturing, operations, supply chain, quality, or related industrial domains.
  • Experience building and operationalizing AI/ML solutions at enterprise scale.
  • Experience with MLOps, model monitoring, and deployment workflows.
  • Experience with Databricks, Spark, orchestration tools, BI platforms, and modern software engineering practices.
  • Familiarity with secure data environments and regulated data handling.
  • Experience working in environments that require balancing innovation with compliance, governance, and business urgency.
  • Exposure to enterprise AI enablement, internal tooling, or organization-wide adoption programs.

Responsibilities

  • Lead the architecture and implementation of production-grade data science and machine learning solutions, from problem framing through deployment and adoption.
  • Build scalable data products, models, and decision-support tools using statistical methods, machine learning, optimization, and modern analytics engineering practices.
  • Partner with business leaders, engineering, IT, manufacturing, quality, finance, and other cross-functional teams to identify high-value opportunities and prioritize work with clear business impact.
  • Translate ambiguous business problems into structured analytical approaches, measurable success criteria, and deliverable roadmaps.
  • Design and maintain reliable data pipelines, feature pipelines, experimentation frameworks, and model monitoring practices.
  • Drive the responsible use of AI across the organization by developing reusable frameworks, templates, evaluation approaches, and best practices for enterprise adoption.
  • Serve as a technical mentor to data scientists, analysts, and engineers; raise the bar on coding, experimentation, documentation, and stakeholder communication.
  • Create executive-ready narratives, visualizations, and recommendations that connect technical findings to business outcomes.
  • Partner with data platform and governance teams to ensure solutions meet requirements for security, compliance, and maintainability.
  • Help shape standards for model lifecycle management, MLOps, analytics engineering, and AI solution delivery.
  • Contribute to integration planning and enterprise analytics initiatives while following approved protocols for meetings, shared materials, data requests, and CSI/non-CSI handling where applicable. Project Comet guidance requires legally approved agendas for certain new cross-company meetings, use of the Data Request List for shared data, and routing of potentially sensitive data through the appropriate review path or clean-room process.
  • Build predictive and optimization models that improve yield, quality, throughput, cost, or planning.
  • Develop AI-enabled tools that scale analyst and engineer productivity.
  • Create reusable data products that standardize metrics, reduce manual effort, and improve decision speed.
  • Lead diagnostic and exploratory analyses on complex manufacturing, product, or enterprise datasets.
  • Establish frameworks for model governance, evaluation, and business adoption.