Sr. Engineer, Data

LivCor · Chicago, IL
Posted 11 days ago · $125,000 - $150,000 · Hybrid

About The Position

LivCor, a Blackstone portfolio company, is a real estate asset management business specializing in multi-family housing. Formed in 2013 and headquartered in Chicago, LivCor is currently responsible for a portfolio of over 400 Class A and B properties comprising more than 150,000 units in markets across the United States.

Our business is focused on making real estate more valuable. But for us, it’s more than that. It’s people first, community always. It’s a life-filled career, not just a career-filled life. It’s doing good work, with good humans, and making a difference. It’s excellence in all its forms. Ultimately, we create great places to work, live, and grow. We do that by focusing on leaving people – and places – better than we found them.

Whew! Still with us? Cool. Let’s talk about where you’d fit in.

Only read further if you are:

  • Kind
  • Humble
  • Honest
  • Relentless
  • Smart with Heart

You should be:

  • Authentic. You do you. Together, we’ll do something amazing.
  • A passionate person with a love for real estate and investing who believes that helping others win is a noble cause, essential to our success.
  • An excellent team player who enjoys working with others and has strong interpersonal skills.
  • Highly motivated, energetic, and organized.

We are seeking a highly skilled Sr. Engineer, Data to join our dynamic data team. The ideal candidate will have extensive experience designing, building, and optimizing data pipelines using ADLS, Databricks, Snowflake, and Python. You will play a critical role in developing scalable data infrastructure to support analytics, machine learning, and business intelligence initiatives. This role will also support the data needs of business-critical applications.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
  • 5+ years of experience in data or a related role; 3+ years of experience as a data engineer, software engineer, and/or data warehouse engineer.
  • Advanced proficiency in SQL and/or Python for data processing, scripting, and automation.
  • Hands-on experience building and optimizing data pipelines.
  • Strong expertise in data modeling, performance tuning, and SQL development.
  • Experience with a cloud platform (AWS, Azure, or GCP) and their data services.
  • Proficiency in ETL/ELT processes and tools for data integration.
  • Understanding of data schema modeling (dimensions, measures, slowly changing dimensions).
  • Familiarity with version control (e.g., Git) and CI/CD practices for data engineering workflows.
  • Strong problem-solving skills and ability to work with complex, unstructured datasets.
  • Excellent communication skills to collaborate with cross-functional teams, and the ability to initiate and drive projects proactively and accurately across a large, diverse team.
  • An overwhelming desire to learn new things and to help people succeed.

Nice To Haves

  • Experience with Azure Databricks Delta Live Tables/declarative pipelines, Spark, and DuckDB
  • Familiarity with Azure Durable Functions
  • Data Governance Frameworks and Unity Catalog
  • Utilization of Parquet, Delta Lake, and/or Iceberg formats
  • Data modeling methodologies: Kimball vs. Inmon (star schema vs. 3NF)
  • ELT and/or medallion (Bronze/Silver/Gold) transformation layers

Responsibilities

  • Design, develop, and maintain robust data pipelines using Databricks to process and transform large-scale datasets.
  • Write efficient, reusable, and scalable SQL and/or Python code for data ingestion, transformation, and integration.
  • Optimize data workflows for performance, reliability, and cost-efficiency in cloud environments.
  • Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.
  • Implement data governance, security, and compliance best practices within data pipelines.
  • Monitor and troubleshoot data pipeline issues, ensuring high availability and data integrity.
  • Leverage Databricks for advanced analytics, machine learning workflows, and real-time data processing.
  • Integrate data from various sources (APIs, databases, blob storage, SFTP, streams) into the data lake for centralized storage and querying.
  • Mentor junior data engineers and contribute to team knowledge sharing.
  • Stay updated on emerging data technologies and recommend improvements to existing systems.
  • Tune and optimize all data ingestion and data integration processes, including the data platform and databases.

Benefits

  • Generous 401(k) match to help you plan for the future
  • Fertility, adoption, and surrogacy support to grow your family your way
  • Comprehensive health benefits, including medical, dental, and vision
  • Hybrid work model with offices in Chicago, NYC, and Atlanta