About The Position

OneMagnify is an AI-native, platform-enabled B2B digital agency operating at the intersection of data, technology, and creativity. The company helps complex organizations drive measurable business outcomes by building smarter customer experiences and delivering highly integrated solutions across digital, media, and technology. By combining deep industry expertise with advanced analytics and artificial intelligence, OneMagnify enables clients to make better decisions, move faster, and compete more effectively in dynamic markets.

As a Data Engineer at OneMagnify, you will build and maintain the data pipelines and integrations that power enterprise analytics and client-facing solutions. This is a craft-focused role: the quality of your work feeds directly into the insights clients rely on and into the AI and analytics products built on top of that data. You'll work alongside data scientists, analysts, and architects to make sure data moves reliably, cleanly, and at scale.

Good data engineering is invisible when it works and painfully obvious when it doesn't. In this role, you're the reason it works. The pipelines you build connect source systems (ERPs, CRMs, data warehouses) to the analytics and AI layers where client decisions get made. When that infrastructure is solid, the models are more accurate, the reporting is trustworthy, and the business cases hold up.

You'll work on engagements with large B2B clients in sectors like automotive, industrial, and enterprise technology. These are organizations managing complex, high-volume data environments where reliability and data quality aren't optional. Your work directly affects how well those clients can act on their data, whether that means optimizing a supply chain, personalizing a customer experience, or tracking campaign performance across channels. You'll collaborate closely with data scientists, analysts, and data architects, not just handing off pipelines but actively contributing to how data domains are structured and how quality standards are defined and upheld across the platform.

Requirements

  • Bachelor's degree in Computer Science, Information Systems, or a related field — or equivalent professional experience
  • 5+ years of hands-on experience in data engineering development or implementation
  • Strong SQL skills across data analysis, validation, and troubleshooting
  • Hands-on experience with Databricks (Delta Lake, Unity Catalog, Spark) and AWS data services (Glue, Redshift, S3, Lambda, or Step Functions); a brief sketch of this stack follows the list
  • Familiarity with APIs and integration methods for connecting systems across an enterprise
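
To ground the stack named above, here is a minimal sketch (in PySpark, assuming a Databricks workspace on AWS) of the kind of task it implies: landing raw files from S3 into a Delta table governed by Unity Catalog. The bucket, catalog, schema, and table names are hypothetical placeholders, not actual OneMagnify or client systems.

    # Minimal sketch: land raw S3 exports into a governed Delta table.
    # All names below (bucket, catalog, schema, table) are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

    # Read raw source data exported to S3 (path is illustrative)
    raw = (
        spark.read
        .option("header", "true")
        .csv("s3://example-client-bucket/exports/orders/")
    )

    # Register the data as a Delta table in Unity Catalog
    # (three-level namespace: catalog.schema.table)
    (
        raw.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("client_catalog.bronze.orders_raw")
    )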

Nice To Haves

  • Experience with Databricks MLflow or Feature Store supporting AI/ML pipeline workflows
  • Familiarity with marketing data ecosystems: CRM platforms, CDP architectures, or Martech/Adtech data flows
  • Exposure to data observability or governance tooling (lineage tracking, data cataloging, pipeline monitoring)
  • Experience in a digital agency, marketing services, or consulting environment with multiple concurrent client data environments
  • Working knowledge of streaming data pipelines or event-driven architectures (e.g., Kafka, Kinesis); see the streaming sketch after this list
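
For the streaming item above, a minimal Spark Structured Streaming sketch in the same PySpark style, reading a Kafka topic and appending to a Delta table. The broker address, topic, and table names are hypothetical; a Kinesis source would follow the same pattern with a different connector.

    # Hypothetical event-driven ingestion: Kafka topic -> Delta table.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker.example.com:9092")  # placeholder broker
        .option("subscribe", "customer-events")                        # placeholder topic
        .load()
        # Kafka delivers bytes; cast key/value to strings for downstream parsing
        .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
    )

    # Append events to a Delta table as they arrive; the checkpoint lets the
    # stream resume exactly where it left off after a restart
    query = (
        events.writeStream
        .format("delta")
        .option("checkpointLocation", "s3://example-bucket/checkpoints/customer-events/")
        .toTable("client_catalog.bronze.customer_events")
    )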

Responsibilities

  • Develop integrations between data sources and target systems including ERPs, CRMs, and data warehouses using Databricks and AWS-native services (Glue, Step Functions, Lambda)
  • Configure, customize, and deploy data engineering applications that support multiple data domains reliably and at scale
  • Leverage the Databricks Lakehouse platform — Delta Lake, Unity Catalog, and Spark-based processing — to optimize pipeline performance and maintainability
  • Develop and enforce data cleansing and standardization guidelines that keep data consistent and trustworthy across systems
  • Use strong SQL skills to validate, troubleshoot, and resolve data issues before they surface downstream (see the validation sketch after this list)
  • Partner with data architects to set quality standards that the broader team can operate against
  • Build integrations using APIs and modern pipeline approaches to connect systems that weren't designed to work together
  • Align pipeline design with enterprise data flows in close collaboration with data scientists and analysts
  • Ensure integrations are built for durability, not just initial delivery
  • Work directly with business users and data stewards to diagnose and resolve data issues within the platform
  • Translate technical pipeline behavior into clear explanations for non-engineering stakeholders
  • Contribute to documentation and processes that make the data platform easier to use and maintain over time
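
As a concrete illustration of the SQL-driven validation responsibility above, here is a hedged sketch (PySpark on Databricks; table and column names are hypothetical) that fails fast when data would break downstream joins or double-count facts.

    # Hypothetical pre-publication checks on a curated orders table.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Rows missing the join key that downstream models depend on
    null_keys = spark.sql("""
        SELECT COUNT(*) AS null_key_rows
        FROM client_catalog.silver.orders
        WHERE customer_id IS NULL
    """).first()["null_key_rows"]

    # Duplicate primary keys that would double-count facts downstream
    dupes = spark.sql("""
        SELECT order_id
        FROM client_catalog.silver.orders
        GROUP BY order_id
        HAVING COUNT(*) > 1
    """).count()

    if null_keys or dupes:
        # A production pipeline might quarantine rows or alert an on-call engineer;
        # here we simply stop bad data from reaching the analytics layer.
        raise ValueError(f"Validation failed: {null_keys} null-key rows, {dupes} duplicate ids")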

Benefits

  • Medical coverage
  • Dental coverage
  • Vision coverage
  • 401(k) retirement plan
  • Paid holidays
  • Flexible Time Off (FTO)
  • Additional programs focused on wellness, financial security, and professional growth