Junior Data Engineer

SWBC
San Antonio, TX
Onsite

About The Position

SWBC is seeking a talented individual to join our dynamic Data team. The ideal candidate will have some experience building and maintaining data pipelines, exposure to AI-ready datasets and machine learning, and a focus on cloud-based data warehouses such as Snowflake or Redshift. This role offers an exciting opportunity to work with cutting-edge technologies and drive impactful data-driven solutions.

As a Junior Data Engineer, you’ll gain hands-on experience building and maintaining data solutions that support real business needs. In this role, you will:

  • Learn how to build and maintain data pipelines with guidance from experienced team members.
  • Work closely with cross-functional teams to support data-driven projects.
  • Assist in improving data quality, performance, and reliability.
  • Gain exposure to modern data tools, technologies, and automation practices.

We provide a supportive, team-oriented environment where you’ll receive mentorship and opportunities to grow your technical and professional skills. Our team values collaboration, curiosity, and continuous learning, and we celebrate both progress and success along the way. If you’re motivated, detail-oriented, and excited to launch your career in data engineering, we’d love for you to join our team.

Requirements

  • Bachelor’s degree or higher in Computer Science, Engineering, Data Science, or related field.
  • Minimum of one (1) to three (3) years’ experience.
  • Minimum of one (1) year of proven experience with modern data warehousing best practices.
  • Advanced proficiency in SQL and experience with databases such as PostgreSQL, MySQL, Snowflake, Redshift, or similar platforms.
  • Hands‑on experience building cloud‑based data pipelines.
  • Experience with orchestration tools such as Airflow, AWS Step Functions, or similar.
  • Strong understanding of data modeling, ELT/ETL patterns, and data quality frameworks.
  • Excellent problem‑solving, communication, and collaboration skills.
  • Proficiency in Python, Java, or Scala, particularly for data transformation and pipeline development.
  • Experience enabling machine learning data pipelines, including feature engineering and training data preparation.
  • Familiarity with feature stores, vector databases, or AI‑related data architectures.
  • Experience with modern ingestion and transformation tools, including Openflow, dbt, AWS Glue, SSIS, Fivetran, and Lambda‑based pipelines.
  • Exposure to MLOps concepts, such as data versioning, lineage, and reproducible pipelines.

Nice To Haves

  • Cloud certifications (AWS, Azure, or GCP) preferred.
  • Experience working in Agile/Scrum development environments.

Responsibilities

  • Designs, develops, and maintains scalable, secure, and cost‑efficient data pipelines to ingest, transform, and serve structured and unstructured data for analytics, ML, and AI workloads.
  • Builds and manages cloud‑native data architectures (data warehouse, lakehouse, and streaming) that support BI, advanced analytics, and machine learning.
  • Implements modern ingestion and ELT pipelines using tools such as Openflow, Snowpipe‑style services, and third‑party ingestion frameworks.
  • Collaborates with data scientists, ML engineers, and business stakeholders to understand feature, training, and inference data requirements.
  • Develops and maintains high‑quality, AI‑ready datasets, including feature tables, historical snapshots, and time‑aware datasets for model training.
  • Implements and enforces data quality, data validation, and data observability controls critical for downstream analytics and AI reliability.
  • Designs and evolves enterprise‑scale data models, including canonical, analytical, and feature‑oriented schemas.
  • Optimizes pipelines for performance, reliability, scalability, and cost across batch and near‑real‑time workloads.
  • Enables access to curated data for GenAI use cases, including text datasets, embeddings, and metadata supporting search and retrieval patterns.
  • Applies data governance, security, and privacy best practices to ensure trusted and compliant data usage for analytics and AI.
  • Stays current with emerging technologies and best practices in data engineering, cloud platforms, and AI‑related data infrastructure.

Benefits

  • Competitive overall compensation package
  • Work/Life balance
  • Employee engagement activities and recognition awards
  • Years of Service awards
  • Career enhancement and growth opportunities
  • Leadership Academy and Mentor Program
  • Continuing education and career certifications
  • Variety of healthcare coverage options
  • Traditional and Roth 401(k) retirement plans
  • Lucrative Wellness Program