Senior Engineer - Software Engineering

Northwestern Mutual · New York, NY (Hybrid)

About The Position

We’re seeking a Senior Data Analytics Engineer to collaborate with our Field Data business, building insightful, interactive reports that inform data-driven decisions and strengthen field performance. You’ll partner with business and product teams, own datasets end to end, lead reporting projects, and mentor offshore and cross-functional engineers.

Technical Overview

We operate a high-volume analytics platform (ingesting ~300 GB/day; 3–4 TB persisted) built on Databricks (AWS) with Unity Catalog, Spark, Airflow, and Control-M. Reporting is primarily Power BI (self-serve and embedded) and SSRS (paginated). Reports are embedded in a React wrapper running on Kubernetes. This role delivers robust, scalable data products today and evolves them with ML, streaming, and cloud-native best practices.

What You’ll Do

  • Demonstrate technical leadership within the team.
  • Establish, aggregate, and share team standards and best practices within the department.
  • Build working relationships across teams within the division.
  • Assess and provide solutions to system-wide architectural problems.

Requirements

  • Bachelor's Degree or equivalent experience.
  • 7+ years’ experience building production analytics/reporting solutions, with a strong Databricks and Power BI background
  • Proven ability to lead projects, mentor distributed teams, and deliver end-to-end analytics solutions leveraging the technical skills listed above
  • Skilled at translating business needs into high-performance, accessible technical solutions
  • Databricks & Spark (production Spark development, partitioning, incremental loads) and Delta Lake
  • Advanced SQL / Spark SQL and Python for ETL, orchestration, and analytics
  • Power BI (data modeling, DAX, Power Query/M), Power BI Embedded, and SSRS/paginated reports
  • Real-time fundamentals (Structured Streaming, Kafka/Kinesis or equivalent) and optimizing batch/stream loads
  • AWS (S3, IAM) with Databricks on AWS; Unity Catalog experience
  • Git + CI/CD pipelines, Terraform (or equivalent IaC), and observability tooling
  • Strong data governance/security (RLS, IAM, encryption)
  • Agile/JIRA experience and strong stakeholder communication

Nice To Haves

  • ML & MLOps: model development/deployment, feature engineering and integrating ML/LLM outputs into reports
  • Front-end & embedding: React + TypeScript, Power BI Embedded, Kubernetes, custom visuals (TypeScript/D3)
  • Big data/cloud scale: Lakehouse optimization and Databricks performance tuning; exposure to Snowflake/Redshift/Synapse
  • DevOps & governance at scale: automated testing for pipelines/reports, data lineage/catalog tools (Purview/Glue), cost-optimized cloud design

Responsibilities

  • Lead design, development, and support of end-to-end analytics: ingestion → transformation → semantic datasets → reporting.
  • Build and optimize batch and streaming ETL pipelines (Databricks/Spark, Delta Lake) and produce performant Power BI datasets and paginated reports.
  • Ensure data governance and security: enforce Unity Catalog policies, RLS, encryption, access controls, and leverage catalog/metadata tools effectively.
  • Coordinate and mentor offshore and cross-team resources; gather requirements, prioritize work, and demo deliverables to stakeholders.
  • Maintain platform reliability: monitoring, observability, automated testing (unit/regression/E2E), data quality checks, and incident response.
  • Implement CI/CD for notebooks, SQL, models, and reports; automate deployments and APIs for data consumption.