Data Engineer II

Honeywell International, Charlotte, NC (Hybrid)

About The Position

Commercial Data & Analytics (CoDA) is a cross‑SBG capability that builds shared analytics platforms, centralized BI, and predictive solutions to drive commercial excellence (pipeline health, price optimization, order forecasting, etc.). You'll join a hands‑on, collaborative group of data engineers, data scientists, and analysts operating modern Azure and Snowflake stacks in support of high‑impact enterprise initiatives.

As a Data Engineer II, you will design, build, and operate secure, scalable data pipelines and models that power analytics and ML across CoDA. You will partner closely with product owners, data scientists, and platform teams to deliver reliable data to Tableau/CRMA, Snowflake, Databricks, and downstream applications. In this role, you will impact the organization by enabling efficient data solutions that drive business value, ensuring that data pipelines are scalable, reliable, and secure.

You will report directly to our Sr. Director, Data Program Management, and you'll work out of our Charlotte, NC location on a hybrid work schedule. Hybrid work schedule note: for the first 90 days, new hires must be prepared to work 100% onsite, Monday through Friday.

Requirements

  • Minimum of 4 years of experience in data engineering, ETL, or database development/administration.
  • Hands‑on experience with Azure Databricks, CI/CD & DevOps, and Snowflake.
  • Strong Python, SQL, PySpark; comfort with both structured and unstructured data.
  • Experience with Agile delivery.

Nice To Haves

  • Bachelor's degree in a technical discipline such as science, technology, engineering, or mathematics.
  • Experience with at least one NoSQL store (e.g., HBase/Cassandra/MongoDB).
  • Familiarity with Hadoop ecosystem (HDFS, Spark), and data integration/ETL tools.
  • Exposure to ML ops tooling (MLflow), AKS‑backed API services, and integration patterns between Databricks, Snowflake, and application layers.
  • Demonstrated contributions to data quality/stewardship initiatives (lineage, metadata, GDM frameworks).
  • Clear communication and ability to present technical trade‑offs to stakeholders.
  • Working knowledge of SFDC data model and commercial processes (opportunities, quotes, quote line items).

Responsibilities

  • Design & build pipelines to ingest, transform, and publish structured/unstructured data from SFDC, EDW, ADLS, Event Hub, and APIs into Databricks/Snowflake, following Delta Lake and Unity Catalog standards.
  • Model data (star/snowflake, CDC, SCD, dimensional views) to support analytics (e.g., commercial pipeline metrics, quote/discount modeling).
  • Operationalize ML/analytics pipelines including bronze→silver→gold processing, joins with model/market indicators, and serving outputs to applications/APIs.
  • Harden platforms: CI/CD with Azure DevOps; monitor jobs/clusters; optimize PySpark/SQL performance; enforce data governance (quality, privacy, lineage, access).
  • Partner & document: collaborate with product owners and data science; write runbooks and technical specs; contribute to weekly updates and stewardship forums.

Benefits

  • In addition to a competitive salary, leading-edge work, and developing solutions side-by-side with dedicated experts in their fields, Honeywell employees are eligible for a comprehensive benefits package.
  • This package includes employer-subsidized Medical, Dental, Vision, and Life Insurance; Short-Term and Long-Term Disability; 401(k) match, Flexible Spending Accounts, Health Savings Accounts, EAP, and Educational Assistance; Parental Leave; Paid Time Off (for vacation, personal business, and sick time); and 12 Paid Holidays.
  • For more information, visit https://benefits.honeywell.com/