Data Engineer

Cushman & Wakefield
Atlanta, GA
Onsite

About The Position

This role supports the development, optimization, and maintenance of Cushman & Wakefield’s commercial real estate (CRE) forecasting infrastructure across the Americas. It focuses on engineering robust data pipelines, automating model workflows, and ensuring the integrity and scalability of forecasting systems. The data engineer operates as a self-sufficient practitioner, capable of independently delivering data solutions or working side-by-side with technology teams to ensure alignment and production readiness of QIG capabilities on an iterative basis, and works closely with senior economists, analytics leads, and technical teams to deliver high-quality, production-ready data solutions that underpin the firm’s House View and related analytical products.

Requirements

  • Bachelor’s degree in Data Engineering, Data Science, Computer Science, Statistics, or a related technical field; advanced degree a plus.
  • 5-7 years of experience in data engineering or a hybrid analytical/engineering role, preferably in a forecasting or analytics/production environment. Real estate experience a plus.
  • Strong proficiency in Python/R, SQL, Databricks, Delta Lake, and data pipeline patterns (e.g., the medallion architecture).
  • Experience with time series data, econometric / data science modeling workflows, and automation tools.
  • Familiarity with cloud platforms (e.g., Azure, AWS) and version control systems.
  • Demonstrated ability to operate in a collaborative, cross-functional environment, contributing both independently and alongside engineering and analytical teams to deliver data solutions.
  • Comfort working in iterative development settings, balancing hands-on execution with stakeholder collaboration and continuous feedback.
  • Strong attention to detail and commitment to data quality.
  • Excellent documentation, communication, and stakeholder management skills; comfortable operating as the technical translator between analytical domain experts and data engineering teams, and able to participate meaningfully in engineering discussions.
  • Exposure to geospatial data concepts and CRE or macroeconomic datasets.
  • Experience working with agile/scrum delivery models in a data and analytics context.
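The medallion architecture named in the requirements layers data as raw (bronze), cleaned and validated (silver), and aggregated for analytics (gold). A minimal illustrative sketch in plain Python follows; in practice these layers would be Delta Lake tables on Databricks, and all field names and markets here are hypothetical:

```python
# Hypothetical sketch of a medallion-style pipeline for CRE time series.
# Plain dicts and lists stand in for bronze/silver/gold Delta tables.

RAW_ROWS = [  # bronze layer: raw ingested records, kept as-is
    {"market": "Atlanta", "quarter": "2024Q1", "vacancy_pct": "12.5"},
    {"market": "Atlanta", "quarter": "2024Q2", "vacancy_pct": "bad"},
    {"market": "Dallas",  "quarter": "2024Q1", "vacancy_pct": "9.1"},
]

def to_silver(rows):
    """Silver layer: type-cast and validate, dropping unparseable rows."""
    clean = []
    for r in rows:
        try:
            clean.append({**r, "vacancy_pct": float(r["vacancy_pct"])})
        except ValueError:
            pass  # in production, route bad rows to a quarantine table
    return clean

def to_gold(rows):
    """Gold layer: aggregate to market-level averages for analytics."""
    totals = {}
    for r in rows:
        acc = totals.setdefault(r["market"], [0.0, 0])
        acc[0] += r["vacancy_pct"]
        acc[1] += 1
    return {m: round(s / n, 2) for m, (s, n) in totals.items()}

silver = to_silver(RAW_ROWS)
gold = to_gold(silver)
print(gold)  # {'Atlanta': 12.5, 'Dallas': 9.1}
```

The point of the layering is that raw inputs are never mutated: validation failures are dropped (or quarantined) between bronze and silver, so downstream aggregates stay reproducible.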

Responsibilities

  • Prototype, build and maintain automated data pipelines for ingesting, transforming, and storing CRE and macroeconomic datasets used in forecasting models.
  • Ensure data integrity and consistency across all of QIG’s inputs and outputs through rigorous validation and quality control procedures. Design and enforce structured data interfaces and integration patterns to ensure consistent ingestion and interoperability across internal and external data sources.
  • Work closely with cross-functional partners to define, refine, and validate data quality rules, using both automated checks and hands-on analysis to ensure outputs meet analytical expectations.
  • Perform exploratory data analysis and profiling on raw and processed datasets to validate pipeline outputs and identify anomalies or inconsistencies.
  • Partner with PRI (Property Research & Intelligence), TDS (Technology Data Solutions), GIS (Geographic Information System), and the forecasting team to ensure governance of time series data, as revisions to geography-based competitive sets can occur.
  • Collaborate with PRI, TDS/GIS and other QIG teams to integrate internal and external data sources into infrastructure deployed by QIG teams.
  • Ensure Global Think Tank, Americas Research and other stakeholders have access to relevant time series (and forecast) data via various tools and capabilities in coordination with QIG leads. Work iteratively with partners to refine data outputs, validate usability, and adjust underlying pipelines or transformations as needed to meet evolving analytical requirements.
  • Create and maintain documentation of synthetic data model architectures, data flows, and diagnostic procedures. Maintain a strong grasp of field-level data lineage and traceability to support transparency, reproducibility, and downstream analytical confidence.
  • Partner with the Head of Data Science & Geospatial Analytics to build a state-of-the-art, novel real estate dataset, with additional relevant data geospatially integrated (e.g., demographics, socioeconomic data, zoning or flood maps, climate or walk score information); produce detailed specifications that guide engineering implementation.
  • Develop internal documentation and process automation, and serve as expert on the integration, application and processing of internal data, 3rd party vendor data and other public data (e.g., Census TIGER, IPUMS) as appropriate with QIG leads.
  • Advise, integrate and execute normalization methods with internal and external partners, co-developing approaches with technology teams when necessary and validating outputs through hands-on implementation and analysis.
  • Identify new use cases for proprietary data and apply appropriate cleaning and normalization techniques so the data can be used in statistical, econometric, and other commercial analytics applications.
  • Contribute to the evolution of the QIG data infrastructure by identifying opportunities for efficiency gains, automation, and scalability.
  • Support the integration of emerging technologies (e.g., ML/AI, advanced lakehouse patterns) into data workflows under guidance from senior team members through hands-on experimentation, prototyping, or coordination with TDS as needed.
  • Coordinate with TDS and PRI on internal data and technology initiatives, contributing hands-on development or feedback where appropriate to scale, optimize, and productionize solutions in support of QIG capabilities.
  • Serve as the key liaison for all external data dependencies; monitor the evolution of 3rd party data products and capabilities, assess their fit against QIG analytical requirements, and produce intake specifications when new sources are approved for integration. As needed, partner with technology teams to evaluate and integrate internally managed data sources.
  • When/where appropriate, maintain a living requirements register and change log that tracks open data engineering requests, their status in the TDS backlog, acceptance criteria, and QIG sign-off outcomes.
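The validation and quality-control duties above can be illustrated with a small rule set for a quarterly time series: checks for missing values, out-of-range values, and gaps between consecutive quarters. The rule names, bounds, and sample data are invented for illustration, not QIG’s actual rules:

```python
# Illustrative data-quality checks for a quarterly CRE time series.
# Thresholds and series contents are hypothetical.

def parse_quarter(q):
    """'2024Q3' -> (2024, 3)."""
    year, qtr = q.split("Q")
    return int(year), int(qtr)

def quality_report(series):
    """Return rule violations for a list of (quarter, value) pairs."""
    issues = []
    for q, v in series:
        if v is None:
            issues.append(f"{q}: missing value")
        elif not (0.0 <= v <= 100.0):  # e.g., vacancy rate must be a percentage
            issues.append(f"{q}: value {v} out of range")
    # gap check: each quarter must immediately follow the previous one
    quarters = [parse_quarter(q) for q, _ in series]
    for (y1, q1), (y2, q2) in zip(quarters, quarters[1:]):
        expected = (y1, q1 + 1) if q1 < 4 else (y1 + 1, 1)
        if (y2, q2) != expected:
            issues.append(f"gap between {y1}Q{q1} and {y2}Q{q2}")
    return issues

report = quality_report([
    ("2023Q4", 8.2),
    ("2024Q1", None),
    ("2024Q3", 107.0),  # 2024Q2 is absent, so a gap is flagged too
])
print(report)
```

Running this flags three issues: the missing 2024Q1 value, the out-of-range 2024Q3 value, and the gap where 2024Q2 should be. In a production pipeline, such rules would typically be co-defined with analytical stakeholders and run automatically on every ingest.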

Benefits

  • health, vision, and dental insurance
  • flexible spending accounts
  • health savings accounts
  • retirement savings plans
  • life and disability insurance programs
  • paid and unpaid time away from work


What This Job Offers

Job Type

Full-time

Career Level

Mid Level

Number of Employees

5,001-10,000 employees
