Texas Instruments · Posted 13 days ago
Full-time
Dallas, TX
5,001-10,000 employees

Change the world. Love your job. In your first year with TI you’ll join the Career Accelerator Program (CAP), a fast-track development experience that blends professional-skill workshops, technical training, and on-the-job learning so you can start delivering real-world impact from day one.

About the job:

  • Design, build, and verify scalable digital accelerators that deliver high-performance acceleration of key neural network layers (MLPs, RNNs, CNNs, GNNs, transformers).
  • Create associated CPUs to efficiently cover less common neural network operations.
  • Work with business unit partners to integrate the associated IP into a variety of SoCs across multiple process technologies.

Minimum requirements:
  • Master's degree and / or PhD in Electrical Engineering, Computer Engineering, or related technical field of study
  • Cumulative 3.0 / 4.0 GPA or higher
  • Proficient in Verilog for RTL development
  • Proficient in C / C++ for modeling
  • Experience with EDA tools (lint, synthesis, place and route)
  • Strong technical leadership, communication and interpersonal skills

Preferred qualifications:
  • High-performance, throughput-optimized accelerator design
  • Development of novel neural-network accelerators, with associated publications in top technical conferences
  • High-performance memory subsystem design
  • Data streaming and direct memory access (DMA) engine design
  • Matrix multiplier design supporting a variety of data types (int8, fp8, bfloat16, float32)
  • The ability to understand and make power, performance and area (PPA) tradeoffs
  • High-performance, latency-optimized CPU design
  • RISC-V ISA familiarity
  • Cache, prefetcher, and branch predictor design
  • Fetch, decode, dispatch, and commit design for scalar and superscalar architectures
  • Arithmetic and logic unit (ALU) design
  • Understanding of neural-network-based architectures (MLPs, RNNs, CNNs, GNNs, transformers)
  • Python programming and the PyTorch package
  • Dense linear algebra, probability, and calculus
  • The ability to dream what could be and the drive to make the dream a reality