Data Operations Engineer

Calamos Investments · Naperville, IL

About The Position

Monitor, support, and optimize production data pipelines and related systems; the full set of duties is detailed under Responsibilities below.

Equal Opportunity Employer: This employer is required to notify all applicants of their rights pursuant to federal employment laws. For further information, please review the Know Your Rights notice from the Department of Labor.

Requirements

  • Bachelor's degree in Computer Science, Information Systems, or related technical field
  • 2-4 years of experience in operational data management
  • Proficiency in Python and SQL for data manipulation and analysis
  • Experience with operational support, monitoring, alerting, and incident management
  • Demonstrated ability to debug and troubleshoot complex technical issues under pressure
  • Experience with query optimization and performance tuning
  • Strong analytical and problem-solving skills
  • Excellent written and verbal communication skills
  • Ability to work effectively both independently and as part of a team
  • Detail-oriented with strong organizational skills and ability to manage multiple priorities
  • Hands-on experience with cloud data platforms (Azure, AWS, or GCP)
  • Familiarity with Databricks Unity Catalog, Delta Live Tables, and Workflows
  • Experience working with ticketing and service desk applications (Jira Service Management, ServiceNow, etc.)
  • Knowledge of CI/CD practices and deployment automation
  • Experience with data observability and monitoring tools

Nice To Haves

  • Background in financial services or investment management industry
  • Familiarity with .NET development
  • Experience with Agile/Scrum methodologies
  • Understanding of data governance and security best practices
  • Enthusiasm for technology beyond work-related activities

Responsibilities

  • Monitor data pipeline performance, job execution, and system health using observability tools and dashboards
  • Respond to data pipeline alerts and incidents within defined service level agreements (SLAs)
  • Investigate and troubleshoot data quality issues, pipeline failures, and system anomalies
  • Perform root cause analysis on operational incidents and document findings
  • Participate in on-call rotation to provide support for production systems
  • Optimize and support large-scale ETL/ELT pipelines using Databricks (PySpark, Spark SQL, Delta Lake)
  • Coordinate production deployments and releases with the Data Engineering team
  • Maintain and execute operational runbooks and standard operating procedures
  • Identify opportunities to improve pipeline reliability, performance, and efficiency
  • Work with business stakeholders to understand data requirements and operational needs
  • Manage incident tickets and service requests through Jira Service Management
  • Communicate status updates on incidents and issues to technical and non-technical leadership