About The Position

Data plays a critical role in every facet of the Goldman Sachs business. The Data Engineering group is at the core of that offering, providing the platform, processes, and governance that make clean, organized, and impactful data available to scale, streamline, and empower our core businesses. Within Data Engineering, we run and operate some of Goldman Sachs' largest platforms; our clients are engineers and analysts across all business units who depend on our platforms for daily business deliverables. As a Site Reliability Engineer (SRE) on the Data Engineering team, you will be responsible for observability, cost, and capacity, with operational accountability for some of Goldman Sachs' largest data platforms. We are engaged in the full lifecycle of platforms, from design to decommissioning, with an SRE strategy adapted to each stage of that lifecycle.

Requirements

  • Bachelor's or Master's degree in a computational field (Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline)
  • 1-4+ years of relevant work experience in a team-focused environment
  • 1-2 years of hands-on developer experience at some point in your career
  • Understanding of and experience with DevOps and SRE principles, automation, and managing technical and operational risk
  • Experience with cloud infrastructure (AWS, Azure, or GCP)
  • Proven experience in driving strategy with data
  • Deep understanding of the multi-dimensionality of data, data curation, and data quality, such as traceability, security, performance, latency, and correctness across supply and demand processes
  • In-depth knowledge of relational and columnar SQL databases, including database design
  • Expertise in data warehousing concepts (e.g. star schema, entitlement implementations, SQL vs. NoSQL modelling, milestoning, indexing, partitioning)
  • Excellent communication skills and the ability to work with subject matter experts to extract critical business concepts
  • Independent thinker, willing to engage, challenge or learn
  • Ability to stay commercially focused and to always push for quantifiable commercial impact
  • Strong work ethic, a sense of ownership and urgency
  • Strong analytical and problem-solving skills
  • Ability to build trusted partnerships with key contacts and users across business and engineering teams

Nice To Haves

  • Understanding of Data Lake / Lakehouse technologies incl. Apache Iceberg
  • Experience with cloud databases (e.g. Snowflake, BigQuery)
  • Understanding of data modelling concepts
  • Working knowledge of open-source and cloud tooling such as Prometheus and AWS Lambda
  • Experience coding in Java or Python

Responsibilities

  • Drive adoption of cloud technology for data processing and warehousing
  • Drive SRE strategy for some of GS's largest platforms, including Lakehouse and Data Lake
  • Engage with data consumers and producers to match reliability and cost requirements
  • Drive strategy with data

Relevant Technologies: Snowflake, AWS, Grafana, PromQL, Python, Java, OpenTelemetry, GitLab