Data Ops Engineer II

Shamrock
Overland Park, KS

About The Position

About the Role

Shamrock Trading Corporation is seeking a DataOps Engineer who is passionate about data automation and lifecycle management. As a pivotal member of our Data Services team, you will help implement cutting-edge DataOps methodologies to optimize our data engineering and analytics frameworks. Your work will ensure high-quality data solutions and contribute to the overall robustness and reliability of our data systems.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, Analytics, or a related field; or equivalent practical experience
  • 3+ years of experience with Databricks, including Unity Catalog, Delta Lake, and Spark Streaming, or equivalent technologies
  • 3+ years of experience with build and deployment technologies such as Docker, AWS CodeBuild, CodeDeploy, ECR, and ECS
  • 3+ years using Terraform or equivalent infrastructure-as-code tools
  • Proficient in automated testing frameworks and tools
  • 3+ years of experience monitoring cloud technology stacks and APIs
  • 5+ years of experience with cloud data technologies and modern programming languages, including Python, SQL, Java, and Spark
  • 5+ years of experience using the terminal, CLI tools, and Linux

Nice To Haves

  • Experience mentoring junior engineers or leading small technical initiatives
  • Demonstrated ability to work collaboratively in a dynamic team environment with a proactive, positive attitude

Responsibilities

  • Develop and enhance DataOps platforms and lifecycle processes, independently delivering complex automation and reliability improvements
  • Develop, optimize, and maintain CI/CD pipelines for data workflows using GitHub Actions, AWS CodeBuild/CodeDeploy, or similar tools
  • Build and scale automated testing frameworks for data validation, schema checks, and regression testing
  • Use a code-first approach to architect and implement data quality checks and monitoring solutions that ensure data accuracy and reliability
  • Contribute to scalable data lake and data warehouse architectures using Databricks, Delta Lake, Databricks Asset Bundles (DABs), and modern data platform patterns
  • Implement and improve monitoring and alerting using Prometheus, CloudWatch, and Grafana
  • Enhance metadata management and data cataloging using Databricks Unity Catalog
  • Design and implement robust access controls and governance using AWS IAM, Unity Catalog, and Terraform, ensuring compliance with SOC 2 standards
  • Deploy, manage, and scale infrastructure as code using Terraform and AWS services (S3, EKS, Lambda, ECS, IAM)
  • Collaborate closely with engineering and analytics teams to ensure performance, scalability, and cost-efficiency, and to identify operational risk in projects
  • Provide tier 1 and tier 2 operational support for internal reporting and API requests, ensuring timely and secure data delivery
  • Identify data anomalies and perform forensic analysis to reach a conclusive root-cause analysis (RCA) of data issues
  • Communicate clearly during incidents and coordinate cross-team problem solving
  • Participate in on-call rotations while improving runbooks and reducing alert noise
  • Independently perform proofs of concept (POCs) to validate solutions for operational issues and translate successful approaches into clear Jira stories for implementation
  • Contribute to the evolution of DataOps best practices, platform maturity, and team standards
  • Create technical documentation that can be understood by the least technical team members
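By way of illustration, the code-first data quality and schema checks described above might look like the following minimal sketch. The schema, field names, and function are hypothetical examples, not part of Shamrock's actual stack:

```python
# Hypothetical sketch of a code-first data quality check: each record is
# validated against an expected schema, flagging missing/null fields and
# type mismatches. Schema and field names are illustrative only.

EXPECTED_SCHEMA = {"order_id": int, "amount": float, "region": str}

def validate_records(records, schema=EXPECTED_SCHEMA):
    """Return a list of (record_index, reason) tuples for failed checks."""
    failures = []
    for i, rec in enumerate(records):
        for field, ftype in schema.items():
            if field not in rec or rec[field] is None:
                failures.append((i, f"missing or null field: {field}"))
            elif not isinstance(rec[field], ftype):
                failures.append(
                    (i, f"wrong type for {field}: {type(rec[field]).__name__}")
                )
    return failures

good = {"order_id": 1, "amount": 9.99, "region": "KS"}
bad = {"order_id": "x", "amount": None}  # wrong type, null, missing region
print(validate_records([good, bad]))
```

A check like this would typically run inside an automated testing framework or CI pipeline, failing the build or raising an alert when records do not conform.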