Software Engineer, Data Infrastructure

Figma · San Francisco, CA · Remote

About The Position

The Data Infrastructure team at Figma builds and operates the foundational platforms that power analytics, AI, and data-driven decision-making across the company. We serve a diverse set of stakeholders, including AI researchers, machine learning engineers, data scientists, product engineers, and business teams that rely on data for insights and strategy.

Our team owns and scales critical data platforms such as the Snowflake data warehouse, the ML Datalake, and large-scale data movement and processing applications, managing all data flowing into and out of these platforms. Despite being a small team, we take on high-scale, high-impact challenges.

In the coming years, we're focused on building the foundational infrastructure to support AI-powered products, developing streaming interconnects between our core systems, and revamping our orchestration and financial data architecture with a strong emphasis on data quality, reliability, and efficiency.

If you're passionate about building scalable, high-performance data platforms that empower teams across Figma, we'd love to hear from you! This is a full-time role that can be held from one of our US hubs or remotely in the United States.

Requirements

  • 5+ years of Software Engineering experience, specifically in backend or infrastructure engineering.
  • Experience designing and building distributed data infrastructure at scale.
  • Strong expertise in batch and streaming data processing technologies such as Spark, Flink, Kafka, or Airflow/Dagster.
  • A proven track record of impact-driven problem-solving in a fast-paced environment.
  • A strong sense of engineering excellence, with a focus on high-quality, reliable, and performant systems.
  • Excellent technical communication skills, with experience working with both technical and non-technical counterparts.
  • Experience mentoring and supporting engineers, fostering a culture of learning and technical excellence.

Nice To Haves

  • Experience with data governance, access control, and cost optimization strategies for large-scale data platforms.
  • Familiarity with our stack, including Golang, Python, SQL, frameworks such as dbt, and technologies like Spark, Kafka, Snowflake, and Dagster.
  • Experience designing data infrastructure for AI/ML pipelines.
  • The ability to navigate ambiguity, take ownership, and drive projects from inception to execution.

Responsibilities

  • Design and build large-scale distributed data systems that power analytics, AI/ML, and business intelligence.
  • Develop batch and streaming solutions to ensure data is reliable, efficient, and scalable across the company.
  • Manage data ingestion, movement, and processing through core platforms like Snowflake, our ML Datalake, and real-time streaming systems.
  • Improve data reliability, consistency, and performance, ensuring high-quality data for engineering, research, and business stakeholders.
  • Collaborate with AI researchers, data scientists, product engineers, and business teams to understand data needs and build scalable solutions.
  • Drive technical decisions and best practices for data ingestion, orchestration, processing, and storage.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Mid Level
  • Industry: Professional, Scientific, and Technical Services
  • Education Level: No Education Listed
  • Number of Employees: 1,001-5,000 employees
