Altera Semiconductor • Posted 14 days ago
$143K - $207K/yr
Full-time • Mid Level
San Jose, CA

Join our FPGA software and tools team to help build the next-generation AI Copilot for our design toolchain! You’ll work at the intersection of large language models, agentic workflows, and FPGA design, creating intelligent assistants that make complex hardware design flows faster, easier, and more intuitive for engineers. This role is ideal for someone who has hands-on experience building real LLM applications and is excited to apply those skills to solve real problems in electronic design automation (EDA). Responsibilities include, but are not limited to, the following:

  • Designs, develops, integrates, tests, validates, and/or debugs software supporting Altera product features that enable or utilize Artificial Intelligence, including machine learning and deep learning.
  • Understands internal and external partner software and develops software across the stack (spanning firmware, drivers, OS, middleware, frameworks, algorithms, and applications) as required to enable and optimize specific AI features, capabilities, solutions, reference platforms, or Altera products.
  • May include developing reference AI software, improving or enabling customer designs to obtain the greatest value from Altera AI products, and developing and/or optimizing workloads for AI benchmarks and for simulation.
  • Master’s Degree in Computer Science, Computer Engineering, Electrical Engineering, or related field, or a Bachelor's Degree with 2+ years of experience.
  • Demonstrated, hands-on experience building LLM-based applications end-to-end (backend + orchestration).
  • Experience implementing RAG (vector stores, embeddings, retrieval strategies, evaluation).
  • Experience creating and curating datasets for LLMs (prompt/response pairs, feedback data).
  • Strong programming skills in one or more of: Python, TypeScript/JavaScript, or similar.
  • Familiarity with modern LLM tooling (e.g., LangChain, LlamaIndex, custom orchestration, or in-house frameworks).
  • Understanding of FPGA or ASIC design flows (design entry, synthesis, P&R, timing analysis, verification, constraints, debug).
  • Familiarity with tools like Quartus, Vivado, or similar.
  • Experience integrating AI assistants into developer or engineer workflows (IDEs, CLIs, GUIs, dashboards).
  • Experience working with cloud services, vector databases, or telemetry/logging pipelines.