Senior Advanced AI Platform Engineer

Honeywell
Atlanta, GA (Hybrid)

About The Position

As a Senior Advanced AI Platform Engineer at Honeywell, you will lead the design and development of AI algorithms, models, and systems, acting as the subject matter expert for AI projects and driving technical excellence. You will report directly to our Senior Director of Engineering and work out of our Atlanta, GA location on a hybrid schedule.

You will join our Data Engineering, AI & ML Platform team, which designs, builds, and scales the enterprise AI/ML platform that powers intelligent building automation across a global portfolio. The role sits at the intersection of data engineering, machine learning operations, and edge AI: building production-grade infrastructure that processes billions of IoT events from building management systems, deploys models to edge devices, and enables AI-driven applications including predictive diagnostics, energy monitoring, and RAG-based knowledge systems.

This is a high-impact individual contributor role for someone who thrives in ambiguity, ships production systems, and can operate across the full stack, from cloud-native platforms to edge GPU hardware.

Requirements

  • Bachelor’s degree from an accredited institution in a technical discipline such as science, technology, engineering, or mathematics.
  • 8+ years of experience in software engineering, data engineering, or ML platform engineering.
  • Strong proficiency in Python and at least one systems language (Go, Rust, or C++).
  • Deep hands-on experience with cloud-native data platforms (Databricks, BigQuery, Azure Data Lake, Kubernetes).
  • Production experience building and deploying ML/AI pipelines including model serving, feature engineering, and experiment tracking.
  • Experience with LLM application frameworks such as LangChain, LangGraph, or equivalent agentic AI orchestration tools.
  • Experience with edge AI deployment on NVIDIA Jetson or similar embedded GPU platforms.
  • Familiarity with building automation protocols (BACnet, Modbus) and IoT time-series data at scale.
  • Experience with knowledge graphs, ontology engineering, or semantic web technologies.
  • Due to compliance with U.S. export control laws and regulations, the candidate must be a U.S. Person, defined as a U.S. citizen, a U.S. permanent resident, or an individual with protected status in the U.S. under asylum or refugee status, or must have the ability to obtain an export authorization.

Nice To Haves

  • Advanced degree in Computer Science, Artificial Intelligence, or related field.
  • Background in building management systems, HVAC, energy management, or industrial IoT domains.
  • Strong leadership and management skills.
  • Experience working in an agile development environment.
  • Proven ability to drive successful cloud development projects and initiatives.
  • Ability to work in a fast-paced and dynamic environment.
  • Attention to detail and excellent problem-solving capability.

Responsibilities

  • AI/ML Platform Engineering: Design, build, and maintain enterprise AI/ML platform services on multi-cloud infrastructure, including model training, serving, experiment tracking, and feature store components.
  • Build and optimize data pipelines processing billions of time-series IoT events using Databricks, Spark, and streaming frameworks.
  • Implement ML orchestration workflows using LangGraph, MLflow, and custom orchestration layers for multi-agent AI systems.
  • Develop and maintain CI/CD automation for model deployment across cloud and edge environments.
  • Architect and deploy AI inference services on edge GPU hardware for real-time building automation use cases.
  • Optimize model performance for edge constraints including quantization, model distillation, and inference acceleration.
  • Build containerized inference microservices that operate reliably in disconnected or low-bandwidth environments.
  • Work with semantic object models and building automation ontologies to map BMS point names to standardized equipment taxonomies.
  • Build and integrate knowledge graph pipelines supporting equipment classification, fault diagnostics, and energy optimization.
  • Develop RAG-based retrieval systems for product documentation and maintenance knowledge bases.
  • Own platform reliability for AI services serving multiple business units.
  • Implement observability, monitoring, and alerting for ML pipelines and inference services.
  • Drive cost optimization across data platform workloads, cloud compute, and storage infrastructure.

Benefits

  • In addition to a competitive salary, leading-edge work, and the opportunity to develop solutions side-by-side with dedicated experts in their fields, Honeywell employees are eligible for a comprehensive benefits package.
  • This package includes employer subsidized Medical, Dental, Vision, and Life Insurance; Short-Term and Long-Term Disability; 401(k) match, Flexible Spending Accounts, Health Savings Accounts, EAP, and Educational Assistance; Parental Leave, Paid Time Off (for vacation, personal business, sick time, and parental leave), and 12 Paid Holidays.
  • For more information visit: Benefits at Honeywell
  • The application period for the job is estimated to be 40 days from the job posting date; however, this may be shortened or extended depending on business needs and the availability of qualified candidates.