Spark with Kubernetes

Responsibilities
- Design, develop, and optimize large-scale data pipelines using Apache Spark (batch and streaming)
- Build and deploy containerized Spark workloads on Kubernetes (EKS/GKE/AKS or on-prem K8s)
- Architect cloud-native data platforms with high availability, scalability, and fault tolerance
- Implement end-to-end ETL/ELT pipelines for structured and semi-structured data
- Tune Spark jobs for performance, memory usage, and cost
- Manage data orchestration using tools such as Airflow, Argo, or Dagster
- Integrate with data sources such as Kafka, cloud object storage (S3/GCS/ADLS), RDBMS, and NoSQL databases
- Ensure data quality, governance, lineage, and observability
- Collaborate with DevOps teams on CI/CD, monitoring, and security best practices
- Mentor junior engineers and participate in architecture and design reviews

Compensation, Benefits and Duration
Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable, good-faith estimate for the role. Medical, vision, and dental benefits, a 401(k) retirement plan, variable pay/incentives, paid time off, and paid holidays are available to full-time employees. This position is not available to independent contractors. No applications will be considered if received more than 120 days after the date of this post.
Job Type
Full-time
Career Level
Mid Level
Education Level
No Education Listed