GSK · Posted 2 months ago
Seattle, WA
5,001-10,000 employees

The Onyx Research Data Tech organization is GSK's Research data ecosystem, with the capability to bring together, analyze, and power the exploration of data at scale. We partner with scientists across GSK to define and understand their challenges and to develop tailored solutions that meet their needs. The goal is to ensure scientists have the right data and insights when they need them, giving them a better starting point for medical discovery and accelerating it. Ultimately, this helps us get ahead of disease in more predictive and powerful ways.

Onyx is a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:

  • Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
  • Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
  • Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real time

At GSK we see a world in which advanced applications of AI will allow us to develop transformational medicines using the power of genetics, functional genomics, and machine learning. AI will also play a role in how we diagnose and use medicines, enabling everyone to do more, feel better, and live longer. This is an ambitious vision that will require the development of products at the cutting edge of AI and machine learning. We're looking for a highly skilled Senior AIML Optimization Engineer to help us make this vision a reality.

Key responsibilities:

  • Serve as a key engineer on the optimization team and contribute technical expertise to teams in closely aligned technical areas such as DevOps, cloud, and infrastructure
  • Lead the design of major optimization software components of the Compute and AIML Platforms, contribute to the development of production code, and participate in both design reviews and PR reviews
  • Accountable for delivering scalable solutions to the Compute and AIML Platforms that support the entire application lifecycle (interactive development and exploration/analysis, scalable batch processing, application deployment), with a particular focus on performance at scale
  • Partner with the AIML and Compute platform teams as well as scientific users to optimize and scale scientific workflows, drawing on a deep understanding of both the software and the underlying infrastructure (networking, storage, GPU architectures, …)
  • Participate in or lead a scrum team and contribute technical expertise to teams in closely aligned technical areas
  • Design innovative strategies and ways of working that create a better environment for end users, and construct a coordinated, stepwise plan to bring others along the change curve
  • Act as a standard bearer for proper ways of working and engineering discipline, including CI/CD best practices, and proactively spearhead improvements within your engineering area
Qualifications:

  • Bachelor's, Master's, or PhD degree in Computer Science, Software Engineering, or a related discipline
  • 6+ years of experience with a Bachelor's, 4+ years with a Master's, or 2+ years with a PhD, applying specialized knowledge in cloud computing, scalable parallel computing paradigms, software engineering, and CI/CD
  • 2+ years of experience in AIML engineering, including large-scale model training and production deployment
  • Deep experience with at least one interpreted and one compiled common industry programming language (e.g., Python, C/C++, Scala, Java), including toolchains for documentation, testing, and operations / observability
  • Deep experience with application performance tuning and optimization, including parallel and distributed computing paradigms and communication libraries such as MPI, OpenMP, and Gloo, with a deep understanding of the underlying systems (hardware, networks, storage) and their impact on application performance
  • Deep expertise in modern software development tools and ways of working (e.g., git/GitHub, DevOps tools, metrics / monitoring)
  • Deep cloud expertise (e.g., AWS, Google Cloud, Azure), including infrastructure-as-code tools (Terraform, Ansible, Packer, …) and scalable cloud compute technologies such as Google Batch and Vertex AI
  • Expert understanding of AIML training optimization, including distributed multi-node training best practices and the associated tools and libraries, as well as hands-on experience accelerating training jobs
  • Understanding of ML model deployment strategies, including agent systems and scalable LLM inference systems deployed in multi-GPU, multi-node environments
  • Experience with CI/CD implementations using git and a common CI/CD stack (e.g., Azure DevOps, Cloud Build, Jenkins, CircleCI, GitLab)
  • Experience with Docker, Kubernetes, and the larger CNCF ecosystem, including application deployment tools such as Helm
  • Experience with low-level build tools (make, CMake) and an understanding of optimization at the build and compile level
  • Demonstrated excellence in agile software development environments using tools like Jira and Confluence