Inmar Intelligence · Posted 11 days ago
Full-time • Mid Level
Chicago, NC
1,001-5,000 employees

As a Senior Data Engineer, you will play a critical role in shaping the architecture, implementation, and reliability of our cloud-native data platforms. You’ll be responsible for designing and developing scalable data solutions that support analytics, automation, and business operations across Inmar. You will work with a modern tech stack—including BigQuery, Airflow, Python, and Pub/Sub—within a Google Cloud Platform (GCP) environment, building pipelines that are secure, cost-effective, and observable. This role is ideal for someone who thrives on solving complex data challenges, mentoring junior engineers, and delivering high-impact, production-grade solutions while collaborating cross-functionally with teams across the business.

PRIMARY ACCOUNTABILITIES: Technical (100%)

  • Design, build, and optimize scalable, secure, and observable data pipelines using tools like BigQuery, Dataflow, Cloud Functions, Pub/Sub, and Airflow (Composer).
  • Translate technical concepts into clear, actionable plans for technical and non-technical stakeholders.
  • Conduct data profiling and source system analysis to design reliable ingestion and transformation processes.
  • Build reusable, automated frameworks that support batch and streaming workflows, while continuously evaluating new tools and patterns to improve performance, reliability, and cost.
  • Ensure data solutions align with enterprise standards, security policies, and compliance requirements (e.g., HIPAA, CCPA).
  • Design and maintain data models that support OLTP, OLAP, and ML use cases.
  • Implement and maintain metadata management, data lineage, and impact analysis practices.
  • Own production issues end-to-end, including root cause analysis (RCA) and resolution with preventative improvements.
  • Collaborate in Agile environments—contributing to sprint planning, backlog refinement, and iterative delivery.
  • Provide mentorship and technical guidance to junior engineers through code reviews, knowledge sharing, and design discussions.
REQUIRED QUALIFICATIONS:

  • B.S. in Computer Science, Information Systems, or a related field required, or 8+ years of equivalent data engineering experience.
  • Proven experience designing, building, and optimizing cloud-native, scalable, cost-efficient data pipelines and architectures, particularly on Google Cloud Platform (preferred); familiarity with AWS or Azure is a plus.
  • Advanced skills in Python and SQL; familiarity with dbt, Airflow, Terraform, and CI/CD pipelines.
  • Experience with batch and streaming technologies such as Kafka, Spark, Dataflow, and Pub/Sub.
  • Expertise with GCP-native tools such as BigQuery, Cloud Functions, Cloud Workflows, GCS, and Datastream.
  • Skilled at balancing performance, reliability, and cost.
  • Demonstrated ability to implement automated monitoring, alerting, and data quality checks for production-grade data pipelines.
  • Experience with relational and NoSQL databases such as Postgres, SQL Server, and Cassandra.
  • Practical knowledge of data modeling and design for OLTP, OLAP, and real-time systems to support analytics, operations, and ML use cases.
  • Understanding of data governance practices, including lineage, metadata, access controls, and privacy compliance (e.g., HIPAA, CCPA).
  • Comfortable working in Agile/Scrum environments, contributing to iterative delivery, sprint planning, and technical scoping.
  • Proven ability to work cross-functionally with data analysts, product managers, privacy, security, and business teams to deliver trusted, actionable data products.
  • Strong problem-solving skills with experience in root cause analysis, writing postmortems, and driving incident resolution and continuous improvement in data platforms.
CORE COMPETENCIES:

  • Integrity: Builds trust by taking ownership of actions, following through on commitments, and communicating with transparency.
  • Technical Mentorship: Supports the growth of junior engineers by sharing knowledge, providing guidance on design and implementation, and fostering a collaborative, learning-focused environment.
  • Team Collaboration: Works cross-functionally to align data engineering solutions with business and technical goals. Builds strong partnerships across product, data governance, and analytics teams.
  • Adaptability: Embraces change and innovation with a growth mindset. Adjusts approach as priorities and technologies evolve.
  • Innovation: Identifies and drives opportunities to improve tools, processes, or architecture through new technologies and creative problem-solving.
  • Curiosity: Actively seeks to understand new data domains, technologies, and stakeholder needs. Continuously develops personal and team knowledge.
  • Analytical Thinking: Breaks down complex systems and problems into manageable components. Makes decisions based on data and structured reasoning.
  • Problem Solving: Investigates and resolves complex technical issues efficiently and thoughtfully, balancing long-term and short-term solutions.
PREFERRED QUALIFICATIONS:

  • Familiarity with AI/ML workflows, including feature engineering, model training, and real-time inference; experience with Vertex AI or similar platforms is a plus.
  • Exposure to enterprise SaaS and ERP integrations (e.g., Oracle Cloud ERP, Salesforce, Workday, Ironclad, Replicon) and partnerships with external data vendors.
BENEFITS:

  • Medical, Dental, and Vision insurance
  • Basic and Supplemental Life Insurance options
  • 401(k) retirement plans with company match
  • Health Spending Accounts (HSA/FSA)
  • Flexible time off and 11 paid holidays
  • Family-building benefits, including Maternity, Adoption, and Parental Leave
  • Tuition Reimbursement and certification support, reflecting our commitment to lifelong learning
  • Wellness and Mental Health counseling services
  • Concierge and work/life support resources
  • Adoption Assistance Reimbursement
  • Perks and discount programs