Senior Data Software Engineer, Personalization

Autodesk
Toronto, ON (Hybrid)

About The Position

Autodesk is seeking a Senior Data Software Engineer for its Personalization team. The role owns the data foundations of Autodesk's personalization platform, which uses AI, machine learning, and agentic systems to deliver intelligent, personalized experiences across Autodesk's products. This is a hands-on, senior engineering position with significant ownership across backend services, data pipelines, and APIs. The engineer will help define schemas, transformations, and architectural patterns, and will work across the stack to ensure data and intelligence surface correctly in products. The team partners closely with product line development teams to democratize ML and analytics across all Autodesk products.

Requirements

  • BS or MS in Computer Science, Engineering, or a related field.
  • 8 or more years of experience building production-grade software systems.
  • Strong experience designing and building backend services and distributed systems using languages such as Python, Java, or Go.
  • Experience with API design and development, including REST or gRPC-based services.
  • Strong experience designing and operating large-scale data systems and distributed architectures in cloud environments, AWS preferred.
  • Deep expertise in SQL and relational data modeling, including schema design, normalization, and performance optimization at scale.
  • Strong understanding of data modeling concepts for analytical and operational systems, including building durable, reusable datasets.
  • Experience building and operating data pipelines using tools like Airflow, Prefect, or similar.
  • Experience working with cloud data platforms such as Snowflake, Hive, or Redshift.
  • Strong understanding of data quality, testing, lineage, and monitoring in production systems.
  • Ability to design and build scalable systems that serve high-volume data workloads.

Nice To Haves

  • Experience with personalization, recommendation systems, or ML platforms.
  • Experience with real-time or event-driven architectures built on technologies such as Kafka or Kinesis.
  • Familiarity with LLM-based systems, including building or supporting data pipelines for AI-driven applications.
  • Experience working with or enabling agentic workflows or AI-powered automation.
  • Experience collaborating closely with data science or ML teams.
  • Experience mentoring engineers or leading technical initiatives.

Responsibilities

  • Design and build scalable data pipelines to ingest, process, and serve product usage and behavioral data for personalization and AI use cases.
  • Develop backend services and data APIs using technologies such as Python, Java, or Kotlin, and frameworks like Spring Boot, FastAPI, or similar.
  • Build and operate microservices that expose data and intelligence capabilities to internal and customer-facing applications.
  • Define and evolve data models, schemas, and transformations to ensure high-quality and reliable datasets.
  • Build systems that support AI and agentic workflows, ensuring data is structured and accessible for automated decision-making and intelligent agents.
  • Partner with product managers, data scientists, and analysts to translate business needs into scalable data systems.
  • Ensure data quality, observability, and reliability across pipelines and services.
  • Contribute to architectural decisions and drive best practices in data and backend engineering.
  • Mentor engineers on data modeling, SQL performance, and scalable pipeline design.

Benefits

  • Annual cash bonuses
  • Stock grants
  • Comprehensive benefits package