Lead Data Architect - Onsite in Jackson, MI

OneMagnify
Detroit, MI
Onsite

About The Position

OneMagnify is an AI-native, platform-enabled B2B digital agency operating at the intersection of data, technology, and creativity. We help complex organizations drive measurable business outcomes by building smarter customer experiences and delivering highly integrated solutions across digital, media, and technology. By combining deep industry expertise with advanced analytics and artificial intelligence, we enable our clients to make better decisions, move faster, and compete more effectively in dynamic markets.

As a Lead Data Architect at OneMagnify, you'll design and own the data infrastructure that makes complex analytics and AI-powered client solutions possible. This is a senior, hands-on technical role at the center of our data practice: setting architectural direction, shaping how data flows and integrates across enterprise systems, and defining the standards others build against. The work you do here directly affects the quality of the insights our clients rely on to make faster, smarter business decisions.

The Impact You'll Have

Data architecture at OneMagnify isn't a back-office function; it's the foundation every analytics, AI, and customer experience solution is built on. When the infrastructure is sound, everything downstream works better: performance models are more accurate, personalization engines fire correctly, and clients get reporting they can trust.

You'll work with large B2B organizations, often in automotive, industrial, or enterprise technology, that are trying to connect disparate systems, improve data reliability, or modernize pipelines for cloud-native analytics. You'll partner closely with analytics, engineering, and strategy teams to make sure the architecture you design holds up in production and scales with client needs. You'll also mentor junior data engineers and analysts, raising the technical bar across the team.

Requirements

  • Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent professional experience
  • 8+ years of hands-on experience in data architecture, with a track record of leading complex, enterprise-scale implementations
  • Demonstrated experience making architectural decisions that have shaped how teams build and operate data systems
  • Strong SQL skills across data analysis, validation, and troubleshooting
  • Deep hands-on experience with Databricks (Delta Lake, Unity Catalog, Spark) and AWS data services (Glue, Redshift, S3, Lambda, or Step Functions)
  • Experience leading or mentoring data engineering teams and setting technical standards
  • Ability to communicate architectural decisions clearly to both technical teams and non-technical stakeholders

Nice To Haves

  • Production experience operating AWS cloud data infrastructure (Redshift, S3, Glue, EMR, or similar) at scale alongside Databricks
  • Familiarity with Databricks MLflow or Feature Store as a bridge between data engineering and AI/ML workflows
  • Familiarity with marketing data ecosystems: CRM platforms, CDP architectures, or Martech/Adtech data flows
  • Experience in a digital agency, marketing services, or consulting environment navigating multiple client data environments
  • Working knowledge of data governance or observability frameworks (lineage, cataloging, or pipeline monitoring)

Responsibilities

Design Enterprise Data Architecture

  • Build scalable architecture frameworks that support enterprise analytics and operational needs
  • Define data models, storage strategies, and integration patterns that translate business requirements into durable technical solutions
  • Ensure designs are built for performance, reliability, and long-term maintainability

Establish and Enforce Data Quality Standards

  • Develop data quality protocols that ensure consistency and reliability across systems and sources
  • Use strong SQL fundamentals to build validation processes that catch issues before they reach end users

Build and Optimize Data Pipelines

  • Lead integration work using APIs and modern pipeline approaches to connect systems that weren't designed to work together
  • Optimize ETL/ELT workflows using Databricks and AWS-native services (Glue, Step Functions, Lambda) to build reliable, scalable pipelines
  • Leverage the Databricks Lakehouse platform — Delta Lake, Unity Catalog, and Spark — to improve pipeline efficiency and reduce operational overhead

Collaborate Across Engineering, Analytics, and Strategy

  • Align architecture decisions with project goals and client outcomes alongside engineering and analytics teams
  • Translate technical concepts clearly for non-technical stakeholders across strategy and delivery

Develop Junior Data Talent

  • Provide mentorship to junior data engineers and analysts
  • Build team-wide fluency in architecture best practices, pipeline patterns, and data quality thinking

Benefits

  • Medical, dental, and vision coverage
  • 401(k) retirement plan
  • Paid holidays
  • Flexible Time Off (FTO)
  • Additional programs focused on wellness, financial security, and professional growth