Machine Learning Architecture Systems - PhD Intern

Keysight Technologies, Inc., Calabasas, CA
$50 - $54

About The Position

Keysight is at the forefront of technology innovation, delivering breakthroughs and trusted insights in electronic design, simulation, prototyping, test, manufacturing, and optimization. Our ~15,000 employees create world-class solutions in communications, 5G, automotive, energy, quantum, aerospace, defense, and semiconductor markets for customers in over 100 countries. Our award-winning culture embraces a bold vision of where technology can take us and a passion for tackling challenging problems with industry-first solutions. We believe that when people feel a sense of belonging, they can be more creative, innovative, and thrive at all points in their careers.

About the Program

Keysight’s Applied AI Research group is pioneering the next generation of adaptive engineering intelligence systems, where simulation, measurement, and machine learning converge to create self-evolving predictive models. These systems learn from complex physical data, continuously refine their architectures, and adapt to new design and testing conditions, accelerating innovation across engineering domains.

This internship focuses on autonomous neural architecture creation and model expansion: developing frameworks that enable neural networks to grow, adapt, and self-optimize. You will work on expanding Keysight’s model portfolio, including Graph Neural Networks (GCNs/GNNs), Graph Neural Operators (GNOs), Fourier Neural Operators (FNOs), and Transformer architectures, while designing heuristic-driven architecture generation and automatic sizing mechanisms. Your work will contribute to building an intelligent modeling substrate in which AI models can construct, evaluate, and improve their own architectures autonomously.

As a PhD Intern in Machine Learning Architecture Systems, you will research and develop the foundations for autonomous neural model creation and model scaling. You will implement meta-architectural algorithms (systems that explore and evolve network topologies automatically), leveraging libtorch, C++, and GPU-accelerated computation. Your contributions will enable the development of adaptive architecture frameworks that reason across model design spaces, select optimal configurations, and expand or prune networks dynamically in response to training feedback and performance signals. You will collaborate with Keysight’s AI researchers, simulation experts, and runtime engineers to integrate these autonomous architecture systems into the company’s high-performance AI modeling stack.

What This Internship Offers

  • The opportunity to define how neural architectures create, expand, and evolve themselves in production-grade AI systems.
  • Mentorship from experts in machine learning, high-performance computing, and AI systems architecture.
  • A chance to advance automated model generation, heuristic-based scaling, and architecture optimization.
  • A portfolio-defining research project at the intersection of neural architecture search, C++/CUDA development, and applied AI autonomy.

Requirements

  • Current PhD student (or recently graduated PhD) in Machine Learning, Computer Science, Applied Mathematics, or Electrical/Mechanical Engineering.
  • Strong proficiency in C++, CUDA, and libtorch (C++ PyTorch API).
  • Deep understanding of neural network architectures, architecture search, and optimization algorithms.
  • Experience designing and training models such as GCNs, GNNs, GNOs, FNOs, or Transformer architectures.
  • Proven ability to implement ML algorithms from first principles in C++, without reliance on Python front-ends.
  • Familiarity with meta-learning, heuristic-driven search, or automated model generation frameworks.
  • Strong analytical, mathematical, and software engineering skills focused on efficiency, reproducibility, and interpretability.
  • Strong foundation in C++ architecture design, template metaprogramming, and HPC performance profiling.
  • Experience with CMake, Bazel, and Git workflows for large-scale C++ projects.
  • Ability to analyze GPU memory efficiency, kernel throughput, and asynchronous data pipelines.
  • Understanding of data serialization, experiment tracking, and model reproducibility in C++/libtorch workflows.
  • Passion for building autonomous, interpretable AI systems that combine mathematical rigor with software craftsmanship.
  • Candidates who wish to be considered must be enrolled in an accredited college/university as of September 2026. Applicants who graduate before September 2026 will not be considered unless they are entering or applying to an MS or PhD program after graduating.
  • Visa Sponsorship is not available for this position. Candidates who now or at any point in the future require sponsorship for employment visa status (e.g., H-1B Visa status) may not be considered.

Nice To Haves

  • Experience with C++ deep learning toolchains (libtorch, cuDNN, TensorRT, or custom CUDA kernels).
  • Background in graph computation, operator learning, or scientific model compression.
  • Familiarity with reinforcement learning, evolutionary search, or genetic programming for architecture discovery.
  • Knowledge of physics-informed ML, surrogate modeling, or scientific AI systems.
  • Exposure to multi-GPU, distributed training, and HPC orchestration environments.

Responsibilities

  • Design and implement algorithms for autonomous neural architecture creation and model expansion using C++/CUDA and libtorch.
  • Develop heuristic and meta-learning systems for automatic model scaling, parameter sizing, and connectivity optimization.
  • Expand the existing model portfolio by implementing and benchmarking architectures such as GNNs, GCNs, GNOs, FNOs, and Transformer variants.
  • Prototype architecture controllers capable of adjusting model topology, receptive fields, or operator depth based on real-time learning feedback.
  • Build frameworks for autonomous model discovery and adaptation.
  • Benchmark generated architectures for efficiency, accuracy, and generalization across simulation and measurement datasets.
  • Collaborate with domain experts to ensure generated models remain physically consistent and interpretable.
  • Integrate automated model generation mechanisms into the internal ML runtime, ensuring modularity, safety, and reproducibility.