Data Engineer I or II

Southern Company, Atlanta, GA
Hybrid

About the Position

This position will be filled in the Data Analytics team within Technology Services. The Data Engineer will be responsible for designing, developing, and maintaining our data infrastructure. This role involves working closely with data scientists, analysts, and other stakeholders to ensure that data pipelines are efficient, reliable, and scalable. The ideal candidate will have a strong background in data engineering, a passion for technology, and a commitment to delivering high-quality solutions.

Requirements

  • Bachelor’s degree in computer science, engineering, or a related field preferred.
  • Experience implementing analytics solutions using the Microsoft analytics toolset and Microsoft Azure.
  • Experience working in a fast-paced, competitive information technology organization.
  • Proficient in languages like SQL, Python, or Java.
  • Writing efficient, scalable, and clean code to handle large datasets and optimize performance.
  • Mastery of relational databases (e.g., MS SQL Server, MySQL, PostgreSQL, Oracle) and NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB), knowing when and how to use each type.
  • Expertise in using platforms like MS SQL Server, Oracle, Amazon Redshift, Google BigQuery, Snowflake, or traditional data warehouses to store, query, and manage large volumes of data.
  • Skills in optimizing queries, indexing, partitioning, and designing efficient database schemas for high performance.
  • Proven ability to design and build robust ETL (Extract, Transform, Load) pipelines for collecting, cleaning, and moving data.
  • Experience working with both batch and real-time data processing frameworks (e.g., Apache Kafka, Apache Flink, Apache Spark, Databricks).
  • Familiarity with orchestration tools like Apache Airflow.
  • Strong knowledge of Medallion Architecture.
  • Proficiency in cloud platforms like Azure (SQL Database, Data Lake, Lake House), AWS (S3, Lambda, Redshift), or Google Cloud (BigQuery, Dataflow). Azure is preferred.
  • Ability to design systems that scale efficiently in the cloud, handling big data and increasing demand without sacrificing performance.
  • Experience in data cleansing, validation, and transformation, ensuring that data is accurate, complete, and in the right format for analysis.
  • Expertise in integrating data from diverse sources (internal and external) while resolving issues like inconsistency or format mismatches.
  • Ability to fine-tune queries, databases, and pipelines to reduce latency, optimize resource usage, and speed up data processing.
  • Familiarity with monitoring systems and logging tools to detect, diagnose, and resolve performance or data issues.
  • Ability to translate business requirements into technical solutions, ensuring the correct data is collected and processed for reporting, analytics, and decision-making.
  • A strong ability to troubleshoot and resolve complex data challenges or inconsistencies that can affect the integrity and availability of data.
  • Familiarity with big data technologies such as Spark or Flink for processing large datasets across distributed systems.
  • Experience with data lakes (e.g., Azure Lakehouse, AWS S3, HDFS) for storing raw and unstructured data and building pipelines to process it efficiently.
  • Proven ability to work closely with data scientists, analysts, and other stakeholders to understand data needs and deliver optimal solutions.
  • Ability to explain technical concepts to non-technical stakeholders, ensuring that data infrastructure decisions align with business goals.
  • Knowledge of data privacy regulations (e.g., GDPR, CCPA) and ensuring that systems comply with these laws while managing sensitive data.
  • Implementing strong data access controls, encryption, and monitoring to secure data both at rest and in transit.
  • A strong commitment to keeping up with the rapidly evolving tech landscape, experimenting with and implementing new tools, frameworks, and approaches to data engineering.
  • Adaptability to changing data architectures or business needs, ensuring data systems remain resilient and future-proof.
  • Familiarity with CI/CD pipelines and automation tools (like Jenkins, Docker, Kubernetes) to streamline development and deployment of data engineering solutions.
  • Experience with tools like Terraform or Azure Resource Manager (ARM) to manage data infrastructure efficiently.
  • Results-oriented
  • Innovative
  • Strategic thinker with an enterprise view for sustainable solutions
  • Committed to continuous learning and improvement
  • Committed to the development of others
  • Committed to building and maintaining constructive partnerships with business partners
  • Works well both independently and with others
  • Acts with speed and decisiveness
  • Committed to ethical conduct
  • Lives and works safely

Responsibilities

  • Designing, building, and maintaining robust data pipelines to collect, clean, and process data from various sources using the Microsoft analytics tool stack on-premises and in the Azure cloud (e.g., SSIS, SSAS, SQL Server, Azure Lake House, MS Fabric, Databricks).
  • Ensuring data is stored efficiently for easy access and retrieval (e.g., in data lakes, warehouses).
  • Integrating data from multiple systems, applications, and external sources.
  • Ensuring data is harmonized and available in a format suitable for analysis.
  • Creating and managing databases, ensuring they are optimized for performance, security, and scalability.
  • Handling structured and unstructured data.
  • Monitoring and ensuring the accuracy, consistency, and reliability of data.
  • Implementing processes for data validation, cleansing, and enrichment.
  • Optimizing queries and databases for faster performance and lower latency.
  • Tuning and troubleshooting database performance issues.
  • Working closely with data scientists, analysts, and other stakeholders to understand data needs.
  • Ensuring data is ready and available for analysis, machine learning models, and reporting.
  • Implementing ETL/ELT processes to extract data from different sources, transform it into usable formats, and load it into data warehouses or lakes.
  • Understanding the different ETL/ELT approaches and when to apply each.
  • Automating repetitive tasks related to data processing and integration.
  • Setting up monitoring to track data quality, pipeline performance, and system health.
  • Ensuring data security measures are in place to protect sensitive information.
  • Adhering to data privacy regulations and compliance standards, such as GDPR or CCPA.
  • Evaluating and implementing new tools, frameworks, and technologies to improve data infrastructure.
  • Keeping up to date with the latest trends in big data and cloud technologies.

Benefits

  • Competitive base salary
  • Annual incentive awards for eligible employees
  • Health, welfare, and retirement benefits designed to support physical, financial, and emotional/social well-being
  • Additional compensation, such as an incentive program