Data Engineer

Atlassian – Austin, TX
Posted 12 days ago

About The Position

Working at Atlassian

Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Atlassian is looking for a Data Engineer to join our Markets Data Engineering Team. You will build top-notch data solutions and applications that inspire important decisions across the organization. You will report to the Senior Data Engineering Manager.

A typical day may involve collaborating with partners to design data models, acquisition processes, and applications that address business needs. With experience in large-scale data processing systems (batch and streaming), you will drive business growth and enhance product experiences, and you will collaborate with Technology Teams, Global Analytical Teams, and Data Scientists across programs.

You'll take ownership of problems end-to-end: extracting and cleaning data, and understanding the systems that generate it. Improving data quality by adding sources, coding rules, and producing metrics is crucial as requirements evolve. Agility and smart risk-taking are important qualities in an industry where digital innovation meets partner and customer needs over time.

Requirements

  • BS in Computer Science or equivalent experience with 3+ years as a Data Engineer or a similar role
  • Programming skills in Python; Java is good to have
  • Ability to design data models for storage and retrieval that meet product requirements
  • Ability to build scalable data pipelines using Airflow, dbt, AWS data services (Redshift, Athena, EMR), Apache projects (Spark, Flink, Hive, and Kafka), and Databricks
  • Experience with modern software development practices (Agile, TDD, CI/CD) applied to data engineering
  • Experience enhancing data quality through internal tools/frameworks that detect data quality issues
  • Working knowledge of relational databases and SQL query authoring

Nice To Haves

  • Followed a Kappa architecture in any of your previous deployments
  • Built and ran continuous integration pipelines.
  • Familiarity with building pipelines using Databricks, including some experience with its APIs and creating dashboards
  • Contributions to open source projects (e.g., Airflow operators)

Responsibilities

  • Influencing product teams
  • Informing Data Science and Analytics Platform teams
  • Partnering closely with data consumers and producers to ensure the quality and usefulness of data assets
  • Defining metrics
  • Instrumenting logging
  • Acquiring/ingesting data
  • Architecting & modeling data
  • Transforming data
  • Ensuring data quality, governance, and enablement
  • Alerting, visualization, and reporting
  • Developing at scale and improving efficiency