Principal Software Engineer, Managed AI

Crusoe
Sunnyvale, CA
$256,000 - $320,000

About The Position

As a Principal Software Engineer on the Managed AI team at Crusoe, you'll play a pivotal role in shaping the architecture and scalability of our next-generation AI inference platform. You will lead the design and implementation of core systems for our AI services, including resilient, fault-tolerant queues, model catalogs, and scheduling mechanisms optimized for cost and performance. This role gives you the opportunity to build and scale infrastructure capable of handling millions of API requests per second across thousands of customers. From day one, you'll own critical subsystems for managed AI inference, helping to serve large language models (LLMs) to a global audience. As part of a dynamic, fast-growing team, you'll collaborate cross-functionally, influence the long-term vision of the platform, and contribute to cutting-edge AI technologies. This is a unique opportunity to build a high-performance AI product that will be central to Crusoe's business growth.

Requirements

  • Advanced degree in Computer Science, Engineering, or a related field.
  • Demonstrable experience in distributed systems design and implementation.
  • Proven track record of delivering early-stage projects under tight deadlines.
  • Expertise with cloud services such as elastic compute, object storage, virtual private networks, and managed databases.
  • Experience in Generative AI (Large Language Models, Multimodal).
  • Familiarity with AI infrastructure, including training, inference, and ETL pipelines.
  • Experience with container orchestration platforms (e.g., Kubernetes) and microservices architectures.
  • Experience using REST APIs and common communication protocols, such as gRPC.
  • Demonstrated experience across the software development lifecycle and familiarity with CI/CD tools.

Nice To Haves

  • Proficiency in Golang or Python for large-scale, production-level services.
  • Contributions to open-source AI projects such as vLLM or similar inference frameworks.
  • Experience optimizing performance on GPU systems and inference frameworks.

Responsibilities

  • Lead the design and implementation of core AI services, including resilient fault-tolerant queues for efficient task distribution.
  • Build and maintain model catalogs for versioning and managing AI models.
  • Develop scheduling mechanisms optimized for cost and performance.
  • Create high-performance APIs for serving AI models to customers.
  • Build and scale infrastructure to handle millions of API requests per second.
  • Optimize AI inference performance on GPU-based systems.
  • Implement robust monitoring and alerting to ensure system health and availability.
  • Collaborate closely with product management, business strategy, and other engineering teams.
  • Influence the long-term vision and architectural decisions of the AI platform.
  • Contribute to open-source AI frameworks and participate in the AI community.
  • Prototype and iterate on new features and technologies.

Benefits

  • Industry competitive pay
  • Restricted Stock Units in a fast-growing, well-funded technology company
  • Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
  • Employer contributions to HSA accounts
  • Paid Parental Leave
  • Paid life insurance, short-term and long-term disability
  • Teladoc
  • 401(k) with a 100% match up to 4% of salary
  • Generous paid time off and holiday schedule
  • Cell phone reimbursement
  • Tuition reimbursement
  • Subscription to the Calm app
  • MetLife Legal
  • Company paid Commuter FSA benefit of $200 per month