Big Data Architect / Data Architect (Hands-On Coding)

Chabez Tech · Atlanta, GA
Posted 3 days ago · Onsite

About The Position

About Chabez Tech

Chabez Tech is a Data Engineering and AI Products company delivering scalable, cloud-native solutions that help organizations achieve operational and customer excellence. Headquartered in Pennsylvania, we partner with enterprises to digitally transform their data ecosystems through modern data engineering, AI, and platform solutions. We specialize in building robust data platforms, accelerating product initiatives, and enabling operational efficiency through flexible, customizable solutions across industries.

Role Overview

We are seeking a highly experienced Big Data Architect / Data Architect with strong hands-on coding expertise to design, build, and scale modern data platforms. This role requires deep expertise in data architecture, distributed systems, and cloud-native big data technologies, along with the ability to contribute directly through development.

Requirements

  • 15+ years of experience in Data Engineering / Data Architecture
  • Strong hands-on coding experience in Python (preferred), advanced SQL, and Scala or Java (nice to have)
  • Strong understanding of distributed systems and large-scale data processing
  • Expertise in data modeling (dimensional, relational, NoSQL)
  • Proven experience designing enterprise-grade data architectures
  • Big Data Frameworks: Spark, Hadoop, Kafka
  • Databases / Data Warehouses: Snowflake, Redshift, BigQuery, Azure Synapse
  • Data Lakes: Amazon S3, Azure Data Lake Storage (ADLS), Google Cloud Storage (GCS)
  • ETL / ELT Tools: Airflow, dbt, Informatica, Talend (any one or more)
  • Experience with at least one major cloud provider: AWS, Azure, or GCP
  • Strong experience designing and implementing cloud-native data architectures

Nice To Haves

  • Experience working in large enterprise environments
  • Strong communication and technical leadership skills
  • Ability to mentor engineers and influence architectural decisions

Responsibilities

  • Design and implement end-to-end big data architectures for large-scale, high-volume systems
  • Lead data modeling efforts across dimensional, relational, and NoSQL models
  • Develop and optimize distributed data processing pipelines
  • Provide hands-on development using Python, SQL, and Spark-based frameworks
  • Architect and implement cloud-native data solutions
  • Collaborate with cross-functional teams to deliver scalable, secure, and high-performance data platforms
  • Ensure best practices for performance, scalability, reliability, and data governance
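The distributed-pipeline work described above usually follows a partition, per-partition combine, then merge pattern (the shape Spark gives you with map-side combines). A dependency-free Python sketch of that pattern, with illustrative function names rather than any framework API:

```python
from collections import Counter
from itertools import islice

def partition(records, size):
    """Split an iterable into fixed-size partitions (as a framework splits a dataset)."""
    it = iter(records)
    while chunk := list(islice(it, size)):
        yield chunk

def partial_count(part):
    """Per-partition aggregation: the 'map-side combine' step."""
    return Counter(word for line in part for word in line.split())

def merge(partials):
    """Reduce step: merge partial aggregates into the final result."""
    total = Counter()
    for c in partials:
        total += c
    return total

lines = ["big data big", "data pipelines", "big pipelines"]
result = merge(partial_count(p) for p in partition(lines, 2))
print(result["big"])  # 3
```

In production the partitions would live on different executors and the merge would happen after a shuffle, but the combine-then-merge decomposition is the same.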