Symbolica AI · Posted 6 months ago
Full-time • Entry Level
San Francisco, CA
1-10 employees

At Symbolica, we’re building the next frontier of symbolic reasoning and architecture specification in AI. As our Founding ML Compiler Engineer, you'll lead the development of the compiler stack and GPU kernels behind our in-house, dependently typed language — a new way to specify and run AI architectures that are correct by construction. This role is a chance to help invent the programming language of the next AI paradigm.

Responsibilities:
  • Translate high-level symbolic architecture specs (written in our custom dependently typed DSL) into efficient compute graphs and GPU-executable code.
  • Build and optimize GPU kernels using CUDA or Rust, targeting training and inference of symbolic AI models.
  • Design and implement compiler infrastructure (e.g. custom IRs, graph lowering, scheduling, memory planning) using MLIR, LLVM, or your own abstractions.
  • Collaborate with mathematicians and researchers to co-design the system from first principles, ensuring semantic correctness throughout.
  • Profile and debug across the stack — from type-level constructs to kernel performance — ensuring mathematical expressiveness meets real-world throughput.

Requirements:
  • Strong experience with Rust or other performant systems languages (e.g. C++, Haskell, Julia).
  • Expertise in compilers, intermediate representations, and building static analyses or program transformations.
  • Familiarity with dependent types, symbolic computation, or strongly typed DSLs.
  • Experience with CUDA, GPU kernels, and performance tuning at the memory/threading level.
  • Background in functional programming, category theory, or type theory.

Benefits:
  • Competitive salary and early-stage equity package.
  • A high-trust, execution-first culture with minimal bureaucracy.
  • Direct ownership of meaningful projects with real business impact.
  • A rare opportunity to sit at the interface between deep research and real-world productization.