BI ETL Data Developer II or III

Miami University Alumni Association
Oxford, OH

About The Position

The Business Intelligence ETL Data Developer II develops, maintains, and supports scalable data pipelines and analytics solutions that enable data-informed decision-making across the University. In this role, you will take ownership of data pipelines and contribute directly to advancing modern data engineering practices within a cloud-based data ecosystem. You will collaborate with stakeholders and technical partners to deliver reliable, high-performing solutions, with a focus on data quality, maintainability, and continuous improvement.

The BI ETL Data Developer III designs, develops, and optimizes scalable data pipelines using modern ELT tools in a cloud-based environment; leads the development of advanced SQL and/or Python-based transformations; and owns production data pipelines, including monitoring, performance tuning, troubleshooting, and ensuring reliability and data integrity. The role also designs and implements robust data validation, testing, and quality frameworks; designs scalable data models; partners with stakeholders to translate complex business requirements into sustainable solutions; drives improvements to existing data pipelines; leads adoption of best practices for version control, testing, deployment, and operational support (CI/CD); develops and maintains comprehensive documentation; and guides team practices while mentoring team members.

Requirements

  • Bachelor’s degree in computer science, information technology, or a relevant field, earned by date of hire, with at least two years of relevant experience (four or more for Developer III); OR an Associate’s degree in computer science, information technology, or a relevant field, earned by date of hire, with at least four years of relevant experience (six or more for Developer III).

Developer II

  • Ability to analyze complex data and develop practical, scalable solutions.
  • Ability to troubleshoot and resolve data pipeline and data quality issues in a timely manner.
  • Ability to translate business needs into effective technical data solutions.
  • Ability to communicate technical concepts clearly to both technical and non-technical audiences.
  • Ability to work collaboratively across teams and build effective working relationships.
  • Ability to manage multiple priorities and adapt to changing requirements in a dynamic environment.
  • Ability to document solutions and processes to support maintainability and knowledge sharing.

Developer III

  • Ability to design scalable, maintainable, and efficient data solutions that support enterprise needs.
  • Ability to diagnose and resolve complex technical and data-related issues.
  • Ability to translate complex business challenges into clear, sustainable technical solutions.
  • Ability to communicate complex technical concepts effectively to diverse audiences.
  • Ability to lead technical discussions and influence the adoption of best practices.
  • Ability to prioritize and manage multiple initiatives in a production environment.
  • Ability to mentor others and contribute to the growth and effectiveness of the team.

Nice To Haves

Developer II

  • Experience developing and maintaining data pipelines using modern ELT tools (e.g., dbt, Fivetran, Airflow, or similar).
  • Experience working with cloud data platforms such as Snowflake, Amazon Redshift, Google BigQuery, or Azure Synapse.
  • Experience using Python for data processing, automation, or integration.
  • Experience implementing data validation, testing, or monitoring solutions.
  • Experience using version control systems (e.g., Git) and contributing to CI/CD workflows.
  • Experience building or supporting data models and analytics solutions.
  • Experience working in Agile or iterative development environments.
  • Experience supporting enterprise data systems in a higher education or similarly complex environment.

Developer III

  • Experience designing and optimizing scalable ELT pipelines using tools such as dbt, Fivetran, Airflow, or similar.
  • Experience working with cloud data platforms such as Snowflake, Amazon Redshift, Google BigQuery, or Azure Synapse at scale.
  • Strong experience using Python for data processing, automation, and system integration.
  • Experience designing dimensional data models and large-scale data transformations (see the illustrative star-schema sketch after this list).
  • Experience implementing and managing data quality, testing, and monitoring frameworks.
  • Experience leading or significantly contributing to CI/CD practices for data pipelines.
  • Experience optimizing data pipelines for performance, scalability, and cost efficiency.
  • Experience working in complex organizational environments (e.g., higher education, healthcare, or enterprise settings).
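
To make the dimensional-modeling experience above concrete, here is a minimal star-schema sketch in Python using the standard-library sqlite3 module. All names (dim_donor, fact_gift, and the sample values) are invented for illustration and are not part of this posting; production work would target a cloud warehouse such as those named above.

```python
# Hypothetical star schema: one dimension (dim_donor) and one fact
# table (fact_gift), joined on a surrogate key. Names are invented.
import sqlite3

DDL = """
CREATE TABLE dim_donor (
    donor_key INTEGER PRIMARY KEY,        -- surrogate key
    donor_id  TEXT UNIQUE,                -- natural/business key
    full_name TEXT
);
CREATE TABLE fact_gift (
    gift_id    INTEGER PRIMARY KEY,
    donor_key  INTEGER REFERENCES dim_donor(donor_key),
    gift_date  TEXT,
    amount_usd REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)

# Load the dimension first, then resolve surrogate keys for fact rows.
conn.execute("INSERT INTO dim_donor (donor_id, full_name) VALUES ('D001', 'A. Alum')")
conn.execute(
    """
    INSERT INTO fact_gift (donor_key, gift_date, amount_usd)
    SELECT donor_key, '2024-06-01', 100.0 FROM dim_donor WHERE donor_id = 'D001'
    """
)
total = conn.execute("SELECT SUM(amount_usd) FROM fact_gift").fetchone()[0]
print(f"total gifts: ${total:.2f}")  # -> total gifts: $100.00
```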

Responsibilities

Developer II

  • Develop, maintain, and enhance data pipelines using modern ELT tools (e.g., dbt, Fivetran, Airflow, or similar) in a cloud-based environment.
  • Write, optimize, and maintain SQL and/or Python-based transformations that support scalable analytics solutions.
  • Monitor, troubleshoot, and resolve issues in production data pipelines to ensure reliability, performance, and data integrity.
  • Implement data validation, testing, and quality checks to improve the consistency and trustworthiness of data assets (a minimal sketch follows this list).
  • Collaborate with stakeholders and technical partners to translate business needs into scalable data solutions.
  • Support and improve existing data integrations and workflows with a focus on maintainability and performance optimization.
  • Contribute to and follow best practices for version control, testing, and deployment (CI/CD).
  • Create and maintain documentation for data pipelines, models, and system processes.
  • Contribute to team practices that support consistent delivery and continuous improvement.
  • Stay current with emerging data tools and practices and apply them to improve team outcomes.
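
As a concrete illustration of the validation and quality checks named in the Developer II responsibilities, the following Python sketch runs two basic rules against a staging table. The table, columns, and rules (stg_donations, non-null keys) are hypothetical, not a prescribed framework.

```python
# Minimal data-quality check: row-count and NULL-rate rules.
# Table and column names below are hypothetical examples.
import sqlite3

def check_table(conn: sqlite3.Connection, table: str, not_null_cols: list[str]) -> list[str]:
    """Return a list of human-readable failures for basic quality rules."""
    failures = []
    cur = conn.cursor()

    # Rule 1: the table must not be empty.
    rows = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    if rows == 0:
        failures.append(f"{table}: expected at least one row, found none")

    # Rule 2: key columns must not contain NULLs.
    for col in not_null_cols:
        nulls = cur.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
        ).fetchone()[0]
        if nulls:
            failures.append(f"{table}.{col}: {nulls} NULL value(s)")
    return failures

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE stg_donations (donor_id INTEGER, amount REAL)")
    conn.execute("INSERT INTO stg_donations VALUES (1, 50.0), (NULL, 25.0)")
    for problem in check_table(conn, "stg_donations", ["donor_id", "amount"]):
        print("FAIL:", problem)  # e.g. FAIL: stg_donations.donor_id: 1 NULL value(s)
```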

Developer III

  • Design, develop, and optimize scalable data pipelines using modern ELT tools (e.g., dbt, Fivetran, Airflow, or similar) in a cloud-based environment.
  • Lead the development of advanced SQL and/or Python-based transformations supporting enterprise analytics and reporting.
  • Own production data pipelines, including monitoring, performance tuning, troubleshooting, and ensuring reliability and data integrity (see the skeletal DAG sketch after this list).
  • Design and implement robust data validation, testing, and quality frameworks to ensure trusted data at scale.
  • Design scalable data models and transformation patterns to support enterprise reporting and analytics needs.
  • Partner with stakeholders to translate complex business requirements into sustainable, high-impact data solutions.
  • Drive improvements to existing data pipelines and processes to enhance performance, scalability, and maintainability.
  • Lead adoption of best practices for version control, testing, deployment, and operational support (CI/CD).
  • Develop and maintain comprehensive documentation for data pipelines, models, and architecture.
  • Guide team practices that support consistent delivery, operational excellence, and continuous improvement.
  • Mentor team members and contribute to the growth of technical standards and capabilities across the team.
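
For the pipeline-ownership responsibilities above, a skeletal Apache Airflow DAG might look like the sketch below. The DAG id, schedule, and task bodies are placeholders invented for illustration, and the operator-based style shown (with Airflow 2.x import paths) is only one of several Airflow idioms.

```python
# Skeletal Airflow DAG: retries, failure alerting, and an explicit
# load -> transform -> validate dependency chain. Names are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG                              # Airflow 2.x import paths
from airflow.operators.python import PythonOperator

def load_raw():
    """Placeholder: land source extracts into the warehouse's raw schema."""

def run_transformations():
    """Placeholder: run SQL/Python transformations (e.g., dbt models)."""

def validate_outputs():
    """Placeholder: run data-quality checks; raise to fail the run."""

default_args = {
    "retries": 2,                            # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,                # simple monitoring hook
}

with DAG(
    dag_id="alumni_giving_elt",              # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                       # argument name as of Airflow 2.4+
    catchup=False,
    default_args=default_args,
) as dag:
    load = PythonOperator(task_id="load_raw", python_callable=load_raw)
    transform = PythonOperator(task_id="run_transformations",
                               python_callable=run_transformations)
    validate = PythonOperator(task_id="validate_outputs",
                              python_callable=validate_outputs)

    load >> transform >> validate            # enforce ordering and lineage
```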