Position Summary...

What you'll do...

Cortex Team is Walmart's core AI conversational platform, powering the vision of delivering the world's best personal assistants to Walmart's customers, accessible via natural voice commands, text messages, rich UI interactions, and a mix of all of the above via multi-modal experiences.

We believe conversations are a natural and powerful user interface for interacting with technology, and that they enable richer customer experiences, both online and in-store. We are designing and building the next generation of Natural Language Understanding (NLU) services that other teams can easily integrate and leverage to build rich experiences: from pure voice and text shopping assistants (Siri, Sparky), to customer care channels, to mobile apps with rich, intertwined, multi-modal interaction modes (Me@Walmart).

Interested in diving in? We need solid engineers with the talent and expertise required to design, build, improve, and evolve our capabilities in at least some of the following areas:

- Service-oriented architecture: exposing our NLU capabilities at scale and enabling increasingly sophisticated model orchestration. Since the service takes in traffic for a large set of Walmart customers (that is 80% of American households!), you will get to solve non-trivial challenges in service scalability and availability. You will design and build the primitives to efficiently orchestrate model-serving microservices, taking their dependencies into account and improving their combined latency and robustness (e.g., fan out in parallel to N services for a single request and reply with whichever gives the fastest answer; see the first sketch after this list). You will also bake in functionality that supports better machine learning modeling and experimental design, such as A/B testing.

- Model serving and operations: there is a constant tension between model improvements (more computation) and model-serving latency, so we are always on a quest to crunch more numbers while preserving our SLAs and controlling operational costs. You will guide our efforts to find the best tradeoffs in architecture, tooling (TensorFlow Serving? vLLM? Triton?), and infrastructure (CPU or GPU? GCP or Azure?) for model serving, based on the latest model developments and product requirements. In particular, you will drive principled, scientific load-testing efforts to clearly identify the tradeoffs at hand and to tune and optimize the model-serving stack (see the second sketch after this list). If interested, you will also get opportunities to work on prompt engineering and agentic systems.

- Tooling, infrastructure, and pipelines: reproducible workflows and models, enabling rapid innovation across the entire product lifecycle. You will author and maintain pipelines that safely build and deploy models to production via continuous deployment, provide scalable and efficient resource management on our cloud infrastructure, and build in robust diagnostics for quality control throughout. You will integrate, or build, labeling tools that plug seamlessly into the heart of our conversation data store (GCP, BigQuery) and intertwine multiple labeling sources of varying confidence levels.
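To make the first bullet concrete, here is a minimal sketch of the fan-out pattern in Python asyncio: call N model services in parallel and return the fastest successful answer. The endpoint names, payload shape, and timeout are illustrative assumptions, not Cortex APIs.

    import asyncio

    import aiohttp

    # Hypothetical model-serving endpoints; hosts and ports are placeholders.
    ENDPOINTS = [
        "http://nlu-model-a:8080/predict",
        "http://nlu-model-b:8080/predict",
        "http://nlu-model-c:8080/predict",
    ]

    async def call_model(session: aiohttp.ClientSession, url: str, utterance: str) -> dict:
        """POST the utterance to one model service and return its JSON prediction."""
        async with session.post(url, json={"utterance": utterance}) as resp:
            resp.raise_for_status()
            return await resp.json()

    async def fan_out(utterance: str, timeout_s: float = 0.5) -> dict:
        """Fan out to all endpoints and return the fastest successful answer."""
        async with aiohttp.ClientSession() as session:
            tasks = [asyncio.create_task(call_model(session, url, utterance))
                     for url in ENDPOINTS]
            try:
                # as_completed yields results in completion order, so the first
                # successful await is the fastest replica's answer.
                for next_done in asyncio.as_completed(tasks, timeout=timeout_s):
                    try:
                        return await next_done
                    except aiohttp.ClientError:
                        continue  # this replica failed; fall through to the next fastest
                raise RuntimeError("all model services failed")
            finally:
                for task in tasks:
                    task.cancel()  # stop the slower calls; no point finishing them

    if __name__ == "__main__":
        print(asyncio.run(fan_out("add bananas to my cart")))

Skipping failed replicas rather than returning the raw first completion is the detail that makes this pattern improve robustness as well as latency.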
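And in the spirit of the second bullet, a load test can start as simply as the sketch below: fix a concurrency level, fire requests at a serving endpoint, and compare latency percentiles across configurations (CPU vs. GPU, TensorFlow Serving vs. vLLM vs. Triton). The endpoint URL, payload, and constants are illustrative assumptions.

    import asyncio
    import statistics
    import time

    import aiohttp

    ENDPOINT = "http://model-server:8501/v1/models/nlu:predict"  # placeholder URL
    CONCURRENCY = 32      # in-flight requests; vary per experiment
    NUM_REQUESTS = 1000

    async def timed_call(session: aiohttp.ClientSession, sem: asyncio.Semaphore) -> float:
        """Send one request under the concurrency cap; return its latency in seconds."""
        async with sem:
            start = time.perf_counter()
            async with session.post(ENDPOINT, json={"instances": ["hello"]}) as resp:
                resp.raise_for_status()
                await resp.read()
            return time.perf_counter() - start

    async def main() -> None:
        sem = asyncio.Semaphore(CONCURRENCY)
        async with aiohttp.ClientSession() as session:
            latencies = await asyncio.gather(
                *(timed_call(session, sem) for _ in range(NUM_REQUESTS)))
        cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
        # Little's law gives a rough throughput estimate: concurrency / mean latency.
        print(f"p50={cuts[49] * 1000:.1f} ms  p99={cuts[98] * 1000:.1f} ms  "
              f"~{CONCURRENCY / statistics.mean(latencies):.0f} req/s")

    if __name__ == "__main__":
        asyncio.run(main())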
Come at the right time, and you will have an enormous opportunity to shape the design, architecture, and implementation of an innovative, mission-critical product that is used every day, by people you know, and that customers love.

As part of the emerging tech group, you will also have the opportunity to build demos and proofs of concept, create white papers, write blog posts, and more.

Note that this is not a fully remote job: you are required to come to the office (currently at least 2 days a week).