Walmart • Posted 2 days ago
Full-time • Mid Level
Onsite • Bellevue, WA
5,001-10,000 employees

Position Summary... What you'll do...

The Cortex Team is Walmart's core conversational AI platform, powering the vision of delivering the world's best personal assistants to Walmart's customers, accessible via natural voice commands, text messages, rich UI interactions, and a mix of all of the above in multi-modal experiences. We believe conversations are a natural and powerful user interface for interacting with technology and enable richer customer experiences, both online and in-store. We are building and designing the next generation of Natural Language Understanding (NLU) services that other teams can easily integrate, leverage, and build rich experiences on: from pure voice and text shopping assistants (Siri, Sparky), to customer care channels, to mobile apps with rich, intertwined, multi-modal interaction modes (Me@Walmart).

Interested in diving in? We need solid engineers with the talent and expertise required to design, build, improve, and evolve our capabilities in at least some of the following areas:

  • Service-oriented architecture in charge of exposing our NLU capabilities at scale and enabling increasingly sophisticated model orchestration. Since the service takes in traffic for a large set of Walmart customers (that is 80% of American households!), you will get to solve non-trivial challenges in service scalability and availability. You will design and build the primitives to efficiently orchestrate model-serving microservices, taking into account their dependencies and improving the combined latency and robustness of those microservices (e.g. fan out in parallel to N services for a single request and reply with whichever gives the fastest answer; see the sketch after this list). You will also bake in functionality that can drive improved machine learning modeling and experimental design, such as A/B testing.
  • Model serving and operations. There is a constant tension between model improvements (more computation) and model-serving latency, so we are always on a quest to crunch more numbers while preserving our SLAs and controlling operational costs. You will guide our efforts to find the best tradeoffs in architecture, tooling (TensorFlow Serving? vLLM? Triton?), and infrastructure (CPU or GPU? GCP or Azure?) for model serving, based on the latest model developments and product requirements. In particular, you will drive principled, scientific load-testing efforts to clearly identify the tradeoffs at hand and tune/optimize the model-serving stack. If interested, you will also get opportunities to work on prompt engineering and agentic systems.
  • Tooling, infrastructure, and pipelines for reproducible workflows and models, enabling rapid innovation across the entire product lifecycle. You will author and maintain pipelines that safely build and deploy models to production via continuous deployment, achieve scalable and efficient resource management (cloud infrastructure), and provide robust, built-in diagnostics for quality control throughout. You will integrate or build labeling tools that plug into the heart of our conversation data store (GCP, BigQuery) and intertwine multiple labeling sources of various confidence levels.
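As an illustration of the "fan out in parallel and reply with whichever answers fastest" pattern mentioned in the first area above, here is a minimal, hypothetical Python sketch. The service names, payload, and simulated latencies are placeholders, not the actual Cortex microservices or their APIs.

import asyncio
import random

async def call_model_service(name: str, utterance: str) -> dict:
    # Stand-in for one NLU model-serving microservice; a real call would be an RPC/HTTP request.
    latency = random.uniform(0.05, 0.30)          # simulated per-replica latency in seconds
    await asyncio.sleep(latency)
    return {"service": name, "intent": "add_to_cart", "latency_s": round(latency, 3)}

async def fan_out_fastest(services: list[str], utterance: str) -> dict:
    # Send the same request to every replica in parallel and keep the first reply.
    tasks = [asyncio.create_task(call_model_service(s, utterance)) for s in services]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:                          # cancel the slower replicas
        task.cancel()
    return next(iter(done)).result()              # raises if the fastest reply was an error

if __name__ == "__main__":
    print(asyncio.run(fan_out_fastest(["nlu-a", "nlu-b", "nlu-c"], "add milk to my cart")))

In production, the same pattern would also need per-call timeouts, retries, and hedging budgets so that cancelled replicas do not waste serving capacity.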
Come at the right time, and you will have an enormous opportunity to make a massive impact on the design, architecture, and implementation of an innovative, mission-critical product that is used every day by people you know and that customers love. As part of the emerging tech group, you will also have opportunities to build demos and proofs of concept, create white papers, write blogs, and more. Note that this is not a fully remote job; you are required to come to the office (currently at least two days a week).

  • Solid data skills, sound computer-science fundamentals, and strong programming experience.
  • Deep hands-on technical expertise in full-stack development.
  • Programming experience with at least one modern language with an efficient runtime, such as Scala, Java, C++, or C#.
  • Experience with at least one relational database technology such as MySQL, PostgreSQL, Oracle, or MS SQL.
  • Some level of fluency in Python (the lingua franca of our data scientists).
  • Understanding of the challenge of distributed data-processing at scale.
  • Deal well with ambiguous/undefined problems; ability to think abstractly.
  • Ability to take a project from scoping requirements through actual launch.
  • A continuous drive to explore, improve, enhance, automate, and optimize systems and tools.
  • Capacity to apply scientific analysis and mathematical modeling techniques to predict, measure and evaluate the consequences of designs and the ongoing success of our platform.
  • Excellent oral and written communication skills.
  • Bachelor’s degree or certification in Computer Science, Engineering, Mathematics, or any other related field.
  • Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or related area and 4 years’ experience in software engineering or related area.
  • Option 2: 6 years’ experience in software engineering or related area.
  • Affinity for prompt engineering
  • Large scale distributed systems experience, including scalability and fault tolerance.
  • Experience taking a leading role in building complex data-driven software systems successfully delivered to customers
  • Relentless focus on scalability, latency, performance robustness, and cost trade-offs – especially those present in highly virtualized, elastic, cloud-based environments.
  • Exposure to cloud infrastructure such as OpenStack, Azure, GCP, or AWS, as well as infrastructure management technologies (Docker, Kubernetes).
  • Experience building/operating highly available systems for data extraction, ingestion, and massively parallel processing of large data sets. In particular, experience building large-scale data pipelines using big-data technologies (e.g. Spark, Kafka, Cassandra, Hadoop, Hive, BigQuery, Presto, Airflow).
  • Hands-on expertise in many disparate technologies, typically ranging from front-end user interfaces through to back-end systems and all points in between.
  • Familiarity with Machine Learning concepts & processes
  • Master's or PhD in Computer Science, Physics, Engineering, Math, or equivalent.
  • Master’s degree in Computer Science, Computer Engineering, Computer Information Systems, Software Engineering, or related area and 2 years' experience in software engineering or related area
  • We value candidates with a background in creating inclusive digital experiences, demonstrating knowledge in implementing Web Content Accessibility Guidelines (WCAG) 2.2 AA standards, assistive technologies, and integrating digital accessibility seamlessly.
  • The ideal candidate would have knowledge of accessibility best practices and join us as we continue to create accessible products and services following Walmart’s accessibility standards and guidelines for supporting an inclusive culture.
  • Beyond our great compensation package, you can receive incentive awards for your performance.
  • Other great perks include 401(k) match, stock purchase plan, paid maternity and parental leave, PTO, multiple health plans, and much more.
  • Health benefits include medical, vision and dental coverage.
  • Financial benefits include 401(k), stock purchase and company-paid life insurance.
  • Paid time off benefits include PTO (including sick leave), parental leave, family care leave, bereavement, jury duty, and voting.
  • Other benefits include short-term and long-term disability, company discounts, Military Leave Pay, adoption and surrogacy expense reimbursement, and more.
  • You will also receive PTO and/or PPTO that can be used for vacation, sick leave, holidays, or other purposes.
  • Live Better U is a Walmart-paid education benefit program for full-time and part-time associates in Walmart and Sam's Club facilities.
  • Tuition, books, and fees are completely paid for by Walmart.