Atlassian · Posted 9 months ago
$168,700 - $271,100/Yr
Full-time • Senior
San Francisco, CA
Publishing Industries

Atlassian is looking for a Principal Data Engineer to join our Corporate Data Engineering team, which is responsible for building our data lake, maintaining our big data pipelines and services, and facilitating the movement of billions of messages each day. We work directly with business stakeholders and with many platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale.

On a typical day you will help our stakeholder teams ingest data into our data lake faster, find ways to make our data pipelines more efficient, or come up with ideas to drive self-serve data engineering within the company. You will also build micro-services and architect, design, and enable self-serve capabilities at scale to help Atlassian grow. We are a team with little legacy in our tech stack, so you'll spend less time paying off technical debt and more time making our data usable and improving our users' experience.

Responsibilities:
  • Help stakeholder teams ingest data into the data lake faster.
  • Make data pipelines more efficient.
  • Drive self-serve data engineering within the company.
  • Build micro-services and architect solutions.
  • Design and enable self-serve capabilities at scale.
Qualifications:
  • 12+ years of experience in a Data Engineer role as an individual contributor.
  • At least 2 years of experience as a tech lead for a Data Engineering team.
  • Track record of driving and delivering large and complex efforts.
  • Strong communicator who builds essential cross-team and cross-functional relationships.
  • Experience building streaming pipelines with a micro-services architecture for low-latency analytics.
  • Experience with relational databases and SQL, Spark, and columnar stores (e.g., Redshift).
  • Experience building scalable data pipelines using Spark with Airflow or similar orchestration tools (see the sketch after this list).
  • Experience with AWS data services (Redshift, Athena, EMR) or similar Apache projects (Spark, Flink, Hive, Kafka).
  • Solid understanding of data engineering tools, frameworks, and standards.
  • Experience with data architecture across Enterprise CRM, Commerce, and Finance Systems.
  • Experience building pipelines on Databricks and familiarity with its APIs.
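
To give a concrete, purely illustrative sense of the "Spark with Airflow" pipeline work named above, here is a minimal sketch of an Airflow DAG that submits a daily PySpark ingestion job. It assumes the apache-airflow-providers-apache-spark package is installed, and every name in it (the DAG id, owner, job script path, and connection id) is hypothetical rather than anything specific to Atlassian's stack.

    # Illustrative sketch only; hypothetical names throughout.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    default_args = {
        "owner": "data-engineering",           # hypothetical team name
        "retries": 2,                          # retry transient failures
        "retry_delay": timedelta(minutes=10),
    }

    with DAG(
        dag_id="daily_events_to_lake",         # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",            # one run per day
        catchup=False,
        default_args=default_args,
    ) as dag:
        # Submit a PySpark job that loads the previous day's events into the lake.
        ingest_events = SparkSubmitOperator(
            task_id="ingest_events",
            application="/opt/jobs/ingest_events.py",  # hypothetical PySpark script
            conn_id="spark_default",                   # default Spark connection
            application_args=["--ds", "{{ ds }}"],     # pass the execution date
        )

A production pipeline would add data-quality checks, alerting, and backfill handling, but the orchestration pattern (an Airflow DAG scheduling Spark jobs) is the same.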
Benefits:
  • Health coverage
  • Paid volunteer days
  • Wellness resources