This role is categorized as hybrid, meaning the successful candidate is expected to report to Austin, TX or Warren, MI at least three times per week, or at another frequency set by the business.

The Role
Join a team of builders shaping enterprise-grade data products and platforms that power analytics, customer experiences, and operational insights at scale. You will design, build, and operate reliable batch and streaming data pipelines, partnering closely with product, platform, and governance teams to deliver high-quality, secure, and discoverable data.

How you'll work
- Product mindset: outcome-driven, iterative delivery, and clear metrics.
- Quality first: automated tests, reproducible pipelines, and continuous improvement.
- Security and compliance by design: least-privilege access, data masking, and auditability.
- Collaboration: partner across platform, governance, and product teams; communicate clearly with technical and non-technical stakeholders.

Tools you may use here
- Languages: Python, SQL
- Compute and pipelines: Apache Spark; orchestration/workflows (e.g., Databricks Workflows, Airflow); containerized jobs where needed
- Storage/metadata: Parquet; lakehouse tables (e.g., Delta, Iceberg); catalog/lineage tools
- DevOps: Git, CI/CD, secrets management, observability (logs/metrics/traces)
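For a flavor of how these tools fit together in the pipeline work described above, here is a minimal sketch of a PySpark batch job, assuming a Spark environment with Delta Lake available; the paths, table, and column names are hypothetical placeholders, not part of this posting.

```python
# Minimal sketch of a batch pipeline on this stack. All paths, tables,
# and columns below are hypothetical examples, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

# Read raw Parquet files from a landing zone (hypothetical path).
raw = spark.read.parquet("s3://example-bucket/landing/orders/")

# Light transformations: deduplicate and mask a sensitive column,
# in line with the security-by-design practices described above.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("customer_email", F.sha2(F.col("customer_email"), 256))
       .withColumn("ingested_at", F.current_timestamp())
)

# Write to a lakehouse table (Delta format here; Iceberg is analogous).
(cleaned.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("analytics.orders_daily"))
```

In practice a job like this would run on a schedule under an orchestrator such as Databricks Workflows or Airflow, with tests, lineage registration, and observability around it.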
Job Type: Full-time
Career Level: Mid Level