Air Apps · Posted 12 days ago
Full-time • Mid Level
San Francisco, CA

As a Data Engineer at Air Apps, you will be responsible for designing, building, and optimizing data pipelines, data warehouses, and data lakes to ensure efficient data processing and analytics. You will work closely with data analysts, scientists, and software engineers to create scalable and reliable data infrastructure that supports business intelligence and machine learning initiatives. This role requires expertise in data architecture, ETL processes, and cloud-based data solutions to handle large volumes of structured and unstructured data.

Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL workflows to support analytics and reporting.
  • Develop and optimize data warehouses and data lakes using cloud platforms such as AWS, Google Cloud, or Azure.
  • Implement real-time and batch data processing solutions for various business needs.
  • Work with structured and unstructured data, ensuring proper data modeling and storage strategies.
  • Ensure data reliability, consistency, and scalability through best practices in architecture and engineering.
  • Collaborate with data analysts, scientists, and software engineers to enable efficient data access and analysis.
  • Automate data ingestion, transformation, and validation processes to improve data quality.
  • Monitor and optimize query performance and data processing efficiency.
  • Implement security, compliance, and governance standards for data storage and access control.
  • Stay up to date with emerging data engineering trends, tools, and technologies.

Requirements:

  • 4+ years of experience in data engineering, software engineering, or database management.
  • Proficiency in SQL, Python, or Scala for data processing and automation.
  • Hands-on experience with cloud-based data solutions (AWS Redshift, Google BigQuery, Azure Synapse, Snowflake).
  • Experience building ETL pipelines with tools such as Apache Airflow, dbt, Talend, or Fivetran.
  • Strong understanding of data modeling, schema design, and database optimization.
  • Experience with big data frameworks (Apache Spark, Hadoop, Kafka, Flink) is a plus.
  • Familiarity with orchestration tools, containerization (Docker, Kubernetes), and CI/CD workflows.
  • Knowledge of data security, governance, and compliance (GDPR, CCPA, SOC 2).
  • Strong problem-solving and debugging skills with the ability to handle large-scale data challenges.
  • Experience working in fast-paced, data-driven environments with cross-functional teams.
Benefits:

  • Apple hardware ecosystem for work.
  • Annual bonus.
  • Medical Insurance (including vision & dental).
  • Disability insurance - short and long-term.
  • 401(k) with up to 4% contribution.
  • Air Conference – an opportunity to meet the team, collaborate, and grow together.
  • Transportation budget.
  • Free meals at the hub.
  • Gym membership.