Control-M Developer with Data Engineering Experience

TMS LLC · Jersey City, NJ
Remote

About The Position

We are seeking an experienced Control-M Developer with strong Data Engineering expertise to design, develop, and manage enterprise-scale batch scheduling and data pipeline workflows. The ideal candidate will have hands-on experience in Control-M automation, data integration, and end-to-end data pipeline orchestration across modern data platforms.

Requirements

  • 5+ years of hands-on experience as a Control-M Developer / Scheduler
  • Strong experience in Data Engineering or ETL development
  • Proficiency in Unix/Linux shell scripting
  • Strong SQL skills for data validation and troubleshooting (a minimal validation sketch follows this list)
  • Experience supporting enterprise batch processing environments
  • Strong analytical and troubleshooting skills
  • Ability to work in fast-paced, production-critical environments
  • Excellent communication and cross-team collaboration
  • Ownership mindset with attention to detail
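
As a concrete illustration of the data-validation work called out above, here is a minimal sketch of the kind of reconciliation check a scheduled job step might run. It is a hypothetical example, not part of the role description: the table names are invented, and the standard-library sqlite3 module stands in for whatever warehouse driver the actual environment uses.

    import sqlite3
    import sys

    # Hypothetical reconciliation check: compare source and target row
    # counts after a nightly load. Table names are illustrative only.
    VALIDATION_SQL = """
    SELECT
        (SELECT COUNT(*) FROM staging_orders) AS staged,
        (SELECT COUNT(*) FROM curated_orders) AS loaded
    """

    def validate(db_path: str) -> int:
        conn = sqlite3.connect(db_path)
        try:
            staged, loaded = conn.execute(VALIDATION_SQL).fetchone()
        finally:
            conn.close()
        if staged != loaded:
            # A non-zero exit code lets the scheduler mark the job step
            # as failed and trigger its rerun/recovery handling.
            print(f"Row-count mismatch: staged={staged} loaded={loaded}")
            return 1
        print(f"Validation passed: {staged} rows")
        return 0

    if __name__ == "__main__":
        sys.exit(validate(sys.argv[1] if len(sys.argv) > 1 else "warehouse.db"))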

Nice To Haves

  • Experience with Python, Spark, or Scala
  • Exposure to cloud-based data platforms (Snowflake, Databricks, BigQuery)
  • Knowledge of CI/CD tools (Git, Jenkins, Azure DevOps)
  • Experience with monitoring tools and SLA reporting
  • Control-M certification

Responsibilities

  • Design, develop, and maintain Control-M job flows for complex batch and data workflows
  • Create and manage job dependencies, calendars, conditions, and alerts
  • Monitor batch jobs, troubleshoot failures, and resolve performance issues
  • Implement job automation, reruns, recovery, and SLA management (see the API sketch after this list)
  • Collaborate with application, data, and infrastructure teams to ensure seamless scheduling
  • Develop and support data pipelines using ETL/ELT tools and scripting
  • Integrate Control-M with data platforms such as:
      • Data warehouses (Snowflake, Redshift, BigQuery, Teradata, Oracle, SQL Server)
      • Big data ecosystems (Hadoop, Spark)
  • Orchestrate workflows involving file transfers (SFTP, FTP), APIs, and cloud storage
  • Support data ingestion, transformation, validation, and downstream consumption
  • Ensure data quality, reliability, and performance across pipelines
  • Schedule and monitor workloads in AWS / Azure / GCP environments
  • Integrate Control-M with cloud-native services (S3, ADLS, Lambda, Databricks, etc.)
  • Use shell scripting / Python for automation and data processing
  • Implement CI/CD best practices for job and pipeline deployments
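
Much of the automation described above can be scripted against the Control-M Automation API, the REST interface that ships with current Control-M versions. The sketch below shows one way a Python helper might authenticate and order a job folder on demand. The endpoint paths follow the Automation API's documented session/login and run/order services, but the host, credentials, server name, and folder name are all placeholders, so treat this as an illustration to verify against your environment's API version.

    import requests

    # Placeholder endpoint; real deployments expose the Automation API
    # on the Control-M/Enterprise Manager host, typically on port 8443.
    BASE = "https://ctm.example.com:8443/automation-api"

    def order_folder(username: str, password: str, ctm_server: str, folder: str) -> dict:
        # Authenticate and obtain a short-lived session token.
        login = requests.post(
            f"{BASE}/session/login",
            json={"username": username, "password": password},
            verify=False,  # lab environments often use self-signed certs
        )
        login.raise_for_status()
        token = login.json()["token"]

        # Order (run) the jobs in the given folder on the given server.
        run = requests.post(
            f"{BASE}/run/order",
            headers={"Authorization": f"Bearer {token}"},
            json={"ctm": ctm_server, "folder": folder},
            verify=False,
        )
        run.raise_for_status()
        return run.json()

    if __name__ == "__main__":
        # Placeholder credentials and names, for illustration only.
        print(order_folder("demo_user", "demo_pass", "controlm", "NightlyLoads"))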