About The Position

The SaaS Engineering organization within the Cisco Networking Group is committed to revolutionizing the deployment of generative AI applications. Our mission is to improve IT visibility and deliver comprehensive analytics for the AI infrastructure stack. By using ground-breaking technologies, we are developing next-generation SaaS-based controllers designed to efficiently manage Data Centers with AI/ML solutions like Nexus Hyperfabric.

Your Impact

You will shape the Hyperfabric data strategy and execute it. You will build and maintain in-product dashboards, APIs, and 3rd party system integrations, all while handling high-volume streaming data such as device telemetry and system alerts. You are committed to designing and building robust and scalable systems that are easy to understand and maintain. Your contributions will directly impact how our customers interact with Hyperfabric by giving them visibility into their data centers’ performance, health, and more.

You will innovate at the intersection of data + AI. You will define and deploy our strategy to collect and stage our telemetry data. Leveraging this, you will build the AI workloads that transform this data into meaningful insights that we surface to our customers, including error detection, troubleshooting, and performance tuning suggestions.

Requirements

  • Bachelor’s + 12 years of experience or Master’s + 8 years of experience in software engineering or a related field, with a focus on building cloud systems, including expertise in database, data lake, and streaming systems.
  • Experience building scalable data pipelines using Spark, Flink, or similar technology.
  • Experience programming with two or more languages such as Go, Python, Bash, Java, C, C++.
  • Experience with AI-based code tools such as Codex, Claude Code, etc.

Nice To Haves

  • Experience deploying or using Data Lakes/Warehouses, including storage, table format, and processing.
  • Experience building and deploying AI-backed functionality, either customer-facing features or internal productivity tools.
  • Expertise with Druid or another time-series analytics database.
  • Expertise with Kafka or another streaming system.

Responsibilities

  • Design and build forward-looking, performant, well-tested software with a specific focus on Go and Python.
  • As a backend data engineer, you will focus on supporting both internal and external use cases.
  • Design and build data pipelines that support Hyperfabric’s data APIs and notification streams, leveraging Druid, Kafka and more.
  • Benchmark and tune these pipelines to support our workloads with high throughput, low latency, and fault tolerance.
  • Deploy new functionality into production with rollout/rollback planning as well as monitoring and alerting.
  • Collaborate with product managers, other software engineers, and QA through all stages of the software development lifecycle.
  • Contribute to the team through technical design reviews, code reviews, and mentoring.

Benefits

  • medical, dental, and vision insurance
  • a 401(k) plan with a Cisco matching contribution
  • paid parental leave
  • short- and long-term disability coverage
  • basic life insurance
  • grants of Cisco restricted stock units
  • 10 paid holidays per full calendar year
  • 1 floating holiday for non-exempt employees
  • 1 paid day off for the employee’s birthday
  • paid year-end holiday shutdown
  • 4 paid days off for personal wellness
  • 16 days of paid vacation time per full calendar year, accrued at a rate of 4.92 hours per pay period for full-time non-exempt employees
  • flexible vacation time off program for exempt employees
  • 80 hours of sick time off provided on the hire date and each January 1st thereafter
  • up to 80 hours of unused sick time carried forward from one calendar year to the next
  • additional paid time away may be requested to deal with critical or emergency issues for family members
  • 10 optional paid days per full calendar year to volunteer
  • annual bonuses