About The Position

Our Mission

Local Logic is digitizing the built world to make it universally understandable and actionable for consumers, investors, developers, and governments, with the ambition of helping build cities that are more sustainable and equitable for the people who live in them. To achieve that dream, we've built a digital twin of cities, quantifying the built world with data and AI to interpret the $217T real estate market across the US and Canada.

We started our journey at McGill University's urban planning department, where we came to see that cities were being developed in all kinds of unsustainable ways. Why were sprawling suburbs still being built, when doing so would increase pollution and inequality? Why were new business parks being built far from mass transit, when doing so would make traffic congestion skyrocket? Why was social housing being built in places that would exacerbate social problems rather than improve them? It became clear to us why: cities are incredibly difficult to understand. We realized, however, that with recent advances in data science, all the complexity of cities could be made simple enough for anyone to understand, and that this understanding would be essential to building the sustainable, equitable, and prosperous cities we so desperately need.

Today, Local Logic delivers sophisticated location insights through web tools, APIs, one-click reports, and a data analytics platform. Our insights are powered by billions of data points we've generated describing all aspects of cities, from the distance to the nearest bus stop and the quietness of a street to the latest trends in the housing market.

Your Mission

As Local Logic's Senior Data Platform Engineer, you will help evolve our batch-heavy data platform, which powers large-scale geospatial processing and time-driven production pipelines (daily, monthly, and quarterly refresh cycles) used to create our location-based insights and predictions from large and diverse sources of data. The platform delivers value to our customers through public APIs serving over 400M calls per month, customer-facing SDKs, and our team of Data Scientists.

The ideal candidate has a strong background in data engineering, geospatial data at scale, and cloud-based technologies, and is motivated to continuously deepen their expertise as our data platform evolves in scale and complexity. You believe in our mission and want to help us achieve it. You bring your own unique perspective to the team so you can challenge the way we do things for the better. You speak up when you disagree, ask questions when you don't understand, and take ownership of your work.

Requirements

  • Proven ability to design, build, operate, and optimize production-grade batch data pipelines and lakehouse datasets at scale, including data modeling, orchestration, observability, and cost management.
  • Strong software engineering proficiency in Python, including writing modular, testable, and production-ready code.
  • Experience integrating batch data pipelines with production databases, ensuring data integrity, consistency, and efficient write patterns.
  • Production experience operating data systems in cloud-native environments, with an understanding of containerization, infrastructure-as-code, and distributed compute patterns. Experience with AWS and Kubernetes is a plus.
  • Experience working with large-scale geospatial datasets, spatial indexing, or geospatial analytics workflows is highly valued.
  • Experience implementing CI/CD practices for data and application workflows, including automated testing and deployment pipelines.
  • Experience designing and operating production-grade, asset-based workflows in Dagster or a similar modern orchestrator (see the sketch after this list).
  • Excellent interpersonal and communication skills.
  • Startup mindset: Ability to embrace change, adapt to shifting priorities and take ownership when required.
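
To make the Dagster and geospatial bullets above concrete, here is a minimal sketch of the kind of asset-based workflow the role involves: a daily-partitioned Dagster asset that buckets points of interest into H3 cells. This is illustrative, not Local Logic's actual pipeline; the asset name, the load_raw_pois loader, and the H3 resolution are assumptions, and it presumes a recent Dagster release and the h3 package's v4 API.

    import pandas as pd
    import h3  # assumes h3 v4 (latlng_to_cell); v3 used geo_to_h3
    from dagster import AssetExecutionContext, DailyPartitionsDefinition, asset

    daily = DailyPartitionsDefinition(start_date="2024-01-01")

    def load_raw_pois(date: str) -> pd.DataFrame:
        """Hypothetical loader; stands in for a lakehouse read."""
        return pd.DataFrame({
            "name": ["bus stop", "park"],
            "lat": [45.5017, 45.5088],
            "lng": [-73.5673, -73.5540],
        })

    @asset(partitions_def=daily)
    def poi_h3_index(context: AssetExecutionContext) -> pd.DataFrame:
        """Index one day's points of interest by H3 cell (res 9, ~0.1 km^2 hexes)."""
        pois = load_raw_pois(context.partition_key)
        pois["h3_cell"] = [
            h3.latlng_to_cell(lat, lng, 9)
            for lat, lng in zip(pois["lat"], pois["lng"])
        ]
        context.log.info(f"Indexed {len(pois)} POIs for {context.partition_key}")
        return pois

Partitioning by day mirrors the daily refresh cycle described above; monthly and quarterly assets would use coarser partition definitions.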

Responsibilities

  • Pipeline Development & Operations: Design, build, and operate large-scale, time-driven batch pipelines and lakehouse datasets (roughly 30 TB in total) that power production APIs and machine learning systems. Ensure reliability, cost-efficiency, reproducibility, and predictable data refresh cycles across daily, monthly, and quarterly workloads.
  • Cross-Functional Enablement: Work closely with data scientists, machine learning specialists, software developers, and technical product managers to translate their requirements into data architecture.
  • Governance & Quality: Define and implement best practices for data management and governance to ensure data quality, accuracy, and consistency across the team. This includes, but is not limited to, schema versioning, data validation frameworks, monitoring and alerting, and data lineage (see the validation sketch after this list).
  • Technical Leadership & Mentorship: Champion software development best practices and standards for performance, quality assurance, testing, security, and code quality through code reviews and design reviews, and by defining architectural standards.
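
As one illustration of the validation frameworks the Governance & Quality bullet mentions, here is a minimal sketch using Dagster's asset checks (available in Dagster 1.5+) to guard the hypothetical poi_h3_index asset from the earlier sketch. The check name, bounds, and poi_pipeline module are illustrative assumptions.

    import pandas as pd
    from dagster import AssetCheckResult, asset_check

    # poi_h3_index is the hypothetical asset from the earlier sketch.
    from poi_pipeline import poi_h3_index

    @asset_check(asset=poi_h3_index)
    def poi_coordinates_in_range(poi_h3_index: pd.DataFrame) -> AssetCheckResult:
        """Flag the refresh if any coordinate falls outside valid lat/lng bounds."""
        bad = poi_h3_index[
            ~poi_h3_index["lat"].between(-90, 90)
            | ~poi_h3_index["lng"].between(-180, 180)
        ]
        return AssetCheckResult(passed=bad.empty, metadata={"invalid_rows": len(bad)})

A failing check surfaces in Dagster's UI and can be configured to block downstream materializations, one way to keep refresh cycles predictable.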

Benefits

  • Comprehensive health insurance on us
  • A health platform for Canadian employees, including telemedicine, an Employee and Family Assistance Program (EFAP), and mental health and stress management support
  • Stock options
  • Unlimited vacation
  • Intentional Fridays
  • An annual health and benefits allowance
  • Initial WFH allowance
  • Bike sharing membership on us
  • A cool office in the heart of Montreal
  • Your professional development is our priority. With a CAD 1,500 annual professional development credit, you're encouraged to keep learning, explore new skills, and advance your career. We want you to thrive, grow, and feel fulfilled while doing work that matters.