Data Engineer

TAS
Houston, TX

About The Position

The Data Engineer is responsible for designing, building, and maintaining the organization’s enterprise data pipelines, curated datasets, and analytics platform foundation. This role enables timely, accurate, and trusted decision-making by ensuring reliable, scalable, and governed data flows across the Enterprise Resource Planning (ERP) system, Product Lifecycle Management (PLM) system, and other integrated business applications. The Data Engineer will engineer ingestion and curation from vendor-hosted enterprise data repositories and integration services, including Infor OS Data Lake and supported extraction methods (e.g., Data Fabric Objects API, Compass SQL API / Compass JDBC Driver, ION Data Lake Flows, and ETL Client for Data Lake). The role will enable enterprise analytics and reporting through Microsoft Fabric and Power BI, and will use Spark/PySpark, SQL, and Python to deliver performant, scalable transformations. This role also supports the organization’s increasing use of AI-enabled analytics by preparing high-quality, well-governed datasets and following company requirements for responsible and compliant AI usage.

Requirements

  • Strong understanding of modern data architectures (data lake/lakehouse/warehouse), data modeling, and ETL/ELT engineering patterns.
  • Proficiency in SQL and Python for data transformation and pipeline development.
  • Hands-on experience with Spark/PySpark for scalable transformations and performance tuning.
  • Experience integrating and extracting data from Infor OS Data Lake using Data Fabric Objects API, Compass SQL API/JDBC, ION Data Lake Flows, ETL Client for Data Lake, and/or Stream Pipelines.
  • Experience developing enterprise analytics assets using Microsoft Fabric and enabling dashboards and self-service reporting using Power BI.
  • Ability to implement data quality controls, reconciliation, documentation, and lineage practices that support trust and auditability.
  • Practical AI competency: ability to prepare AI-ready datasets and responsibly use AI assistants to improve productivity while maintaining accuracy, confidentiality, and compliance.
  • Strong communication skills to translate business needs into technical solutions and collaborate with both technical and non-technical stakeholders.
  • 3+ years of experience in data engineering, analytics engineering, or a related technical role.
  • Demonstrated experience building and maintaining curated datasets used for enterprise reporting and analytics.
  • Demonstrated experience with at least two of the following toolsets: Microsoft Fabric (Lakehouse/Warehouse, pipelines/notebooks); Power BI (dataset modeling and dashboard enablement); Infor OS Data Lake extraction/integration methods (Objects API, Compass SQL API/JDBC, ION Data Lake Flows, ETL Client, Stream Pipelines); Spark/PySpark.
  • Experience implementing monitoring/alerting or operational runbooks for production data pipelines.

Nice To Haves

  • Preferred training/certifications in one or more areas: cloud data platforms, data engineering, analytics engineering, data governance, or AI fundamentals for data/analytics professionals.
  • Familiarity with manufacturing/operations reporting domains (production, inventory, procurement, costing, project execution) preferred.

Responsibilities

  • Data Engineering & Architecture (Microsoft Fabric / Lakehouse) Design and maintain a modern enterprise analytics foundation using Microsoft Fabric (Lakehouse/Warehouse patterns) to support governed reporting and self-service analytics.
  • Build and manage curated data layers aligned to medallion-style processing (raw → standardized → curated) using Spark/PySpark, SQL, and Python (a sketch follows this group of bullets).
  • Develop and maintain enterprise data models optimized for analytics performance, consistent KPI definitions, and reuse across business domains.
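
To illustrate the medallion-style curation described above, here is a minimal PySpark sketch. The table names (raw.sales_orders, curated.sales_orders) and column names are hypothetical placeholders, not references to any actual TAS schema.

    from pyspark.sql import SparkSession, functions as F

    # Minimal sketch of a raw -> standardized -> curated flow.
    # All table and column names below are hypothetical examples.
    spark = SparkSession.builder.appName("curate_sales_orders").getOrCreate()

    # Raw layer: data landed exactly as extracted from the source system.
    raw = spark.read.table("raw.sales_orders")

    # Standardized layer: enforce types, trim keys, remove duplicates.
    standardized = (
        raw.withColumn("order_id", F.trim(F.col("order_id")))
           .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
           .withColumn("quantity", F.col("quantity").cast("double"))
           .dropDuplicates(["order_id"])
    )

    # Curated layer: business-ready table with reusable, KPI-friendly columns.
    curated = standardized.select(
        "order_id", "order_date", "customer_id", "quantity", "unit_price",
        (F.col("quantity") * F.col("unit_price")).alias("extended_amount"),
    )

    curated.write.mode("overwrite").saveAsTable("curated.sales_orders")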
  • Data Integration & Ingestion (Infor OS Data Lake + APIs + Flows) Develop and support automated ingestion from Infor OS Data Lake using supported extraction/integration methods such as: Data Fabric Objects API (object/file extraction) Compass SQL API / Compass JDBC Driver (query-based extraction) ION Data Lake Flows (scheduled push to connection points) ETL Client for Data Lake (scheduled transfer patterns) Stream Pipelines where applicable for continuous/near real-time delivery Implement incremental loading patterns, orchestration, monitoring, alerting, and failure recovery to ensure reliable delivery of daily/near real-time datasets.
  • Partner with application and integration teams to align ingestion with upstream interfaces, data contracts, and security requirements.
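
As an example of the incremental (watermark-based) loading pattern mentioned above, the following is a minimal PySpark sketch using Spark's generic JDBC reader. The connection URL, driver class, credentials, control table, and source table names are placeholders only; the actual connection details come from the Infor Compass JDBC documentation and the organization's configuration.

    from pyspark.sql import SparkSession, functions as F

    # Minimal sketch of an incremental (watermark-based) pull over JDBC.
    # URL, driver, credentials, and table names are placeholders.
    spark = SparkSession.builder.appName("ingest_item_master").getOrCreate()

    # Last successfully loaded timestamp, tracked in a control table.
    watermark = (
        spark.read.table("control.load_watermarks")
             .filter(F.col("source_table") == "item_master")
             .agg(F.max("last_loaded_ts"))
             .first()[0]
    )

    # Query-based extraction: only rows changed since the last load.
    incremental_query = f"""
        (SELECT * FROM item_master
         WHERE last_modified_ts > TIMESTAMP '{watermark}') AS src
    """

    increment = (
        spark.read.format("jdbc")
             .option("url", "jdbc:compass://<host>:<port>/<tenant>")   # placeholder
             .option("driver", "com.example.CompassDriver")            # placeholder
             .option("dbtable", incremental_query)
             .option("user", "<service-account>")                      # placeholder
             .option("password", "<secret>")                           # placeholder
             .load()
    )

    # Append into the raw layer; downstream jobs standardize and curate.
    increment.write.mode("append").saveAsTable("raw.item_master")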
  • Reporting & Analytics Enablement (Power BI + Dataset Modeling) Provide trusted, well-documented datasets that enable enterprise dashboards and self-service analytics in Power BI .
  • Build and maintain business-friendly semantic/dimensional models that support high-performance dashboards and consistent KPI definitions.
  • Support modernization and migration of reporting assets into Microsoft Fabric, ensuring datasets and models align to reporting needs and enterprise metric definitions.

Data Quality, Governance & Master Data Support

  • Implement validation, reconciliation, and anomaly detection to ensure accuracy and completeness of curated datasets.
  • Establish automated checks for common data issues (duplicates, missing attributes, invalid statuses, inconsistent units of measure); a sketch of such checks follows this group of bullets.
  • Partner with master data stakeholders and business data stewards to define standards, drive adoption, and remediate root-cause issues impacting data quality.
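
To make the automated checks above concrete, here is a minimal rule-based data quality sketch in PySpark. The table (curated.item_master), its columns, and the valid-value sets are illustrative assumptions, not actual standards.

    from pyspark.sql import SparkSession, functions as F

    # Minimal sketch of rule-based data quality checks on a curated table.
    # Table, column names, and valid-value sets are hypothetical placeholders.
    spark = SparkSession.builder.appName("dq_item_master").getOrCreate()
    items = spark.read.table("curated.item_master")

    VALID_STATUSES = {"ACTIVE", "INACTIVE", "OBSOLETE"}
    VALID_UOMS = {"EA", "KG", "LB", "M", "FT"}

    checks = {
        # Duplicate business keys.
        "duplicate_item_ids": items.groupBy("item_id").count().filter("count > 1").count(),
        # Missing required attributes.
        "missing_description": items.filter(F.col("description").isNull()).count(),
        # Invalid status codes.
        "invalid_status": items.filter(~F.col("status").isin(*VALID_STATUSES)).count(),
        # Inconsistent units of measure.
        "invalid_uom": items.filter(~F.col("uom").isin(*VALID_UOMS)).count(),
    }

    # Fail the run (or raise an alert) when any check finds issues.
    failures = {name: n for name, n in checks.items() if n > 0}
    if failures:
        raise ValueError(f"Data quality checks failed: {failures}")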

AI-Enabled Analytics Support (Practical AI Competency)

  • Prepare AI-ready datasets by ensuring data completeness, consistency, lineage, and documentation (e.g., feature-ready curated tables, standardized definitions, and auditability).
  • Support AI-assisted development workflows (e.g., using copilots/assistants to accelerate transformation code generation, documentation, and standardization) while adhering to company AI requirements for accuracy, confidentiality, compliance, and labeling of AI-generated content where required.

Operational Excellence (Reliability, Supportability, Documentation)

  • Develop and maintain runbooks, operational documentation, lineage notes, and standardized naming conventions for pipelines, datasets, and reporting layers.
  • Track pipeline health metrics (freshness, completeness, latency, failure rates) and continuously improve reliability and performance; a sketch of a freshness/completeness check appears at the end of this section.
  • Provide knowledge transfer and training to analysts and IT Business Applications team members to improve overall data fluency.
  • All other duties as assigned by TAS.
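
As an example of tracking the pipeline health metrics mentioned above, a minimal freshness and completeness check in PySpark might look like the following. The table names, columns, and thresholds are hypothetical assumptions for illustration.

    from datetime import datetime
    from pyspark.sql import SparkSession, functions as F

    # Minimal sketch of freshness and completeness checks for a curated table.
    # Table names, columns, and thresholds are hypothetical placeholders.
    spark = SparkSession.builder.appName("pipeline_health").getOrCreate()

    FRESHNESS_SLA_HOURS = 26   # expect at least one successful daily load
    MIN_EXPECTED_ROWS = 10_000

    orders = spark.read.table("curated.sales_orders")

    latest_load = orders.agg(F.max("load_ts")).first()[0]
    row_count = orders.count()
    age_hours = (datetime.now() - latest_load).total_seconds() / 3600

    metrics = {
        "freshness_hours": round(age_hours, 1),
        "row_count": row_count,
        "freshness_ok": age_hours <= FRESHNESS_SLA_HOURS,
        "completeness_ok": row_count >= MIN_EXPECTED_ROWS,
    }

    # Persist metrics for dashboards/alerting (e.g., a health table read by Power BI).
    spark.createDataFrame([metrics]).write.mode("append").saveAsTable("ops.pipeline_health")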