At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. We do this by driving Responsible Growth and delivering for our clients, teammates, communities and shareholders every day. Being a Great Place to Work is core to how we drive Responsible Growth. This includes our commitment to being an inclusive workplace, attracting and developing exceptional talent, supporting our teammates' physical, emotional, and financial wellness, recognizing and rewarding performance, and how we make an impact in the communities we serve. Bank of America is committed to an in-office culture with specific requirements for office-based attendance, while allowing an appropriate level of flexibility for our teammates and businesses based on role-specific considerations. At Bank of America, you can build a successful career with opportunities to learn, grow, and make an impact. Join us!

Position Summary:

Data Platform Engineering & DevOps
- Design, implement, and maintain CI/CD pipelines for enterprise data processing and ingestion.
- Automate build, test, and deployment workflows for Spark, Hive, Kafka, and real-time jobs.
- Establish standards ensuring reliability, scalability, and repeatability across environments.

Infrastructure Provisioning & Platform Operations
- Provision and manage Hadoop and distributed compute clusters using Ansible, Mesos, and Marathon.
- Lead lifecycle management, including upgrades, expansions, and decommissioning.
- Support modernization initiatives across thousands of nodes and multi-tenant workloads.

Monitoring, Observability & Resiliency
- Implement observability solutions:
  - Metrics dashboards with Grafana/Prometheus
  - Centralized logs via Elasticsearch
  - Job and platform monitoring with ITRS, Dynatrace, and similar tools
- Ensure stability, SLA adherence, and rapid incident response for critical pipelines.
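As a rough illustration of the SLA-adherence duty above, a pipeline-health check might compare each job run's elapsed time against its allowed window. This is a minimal sketch: the job names, window sizes, and the `check_sla` helper are hypothetical, not part of the posting.

```python
from datetime import datetime, timedelta

# Hypothetical SLA windows per batch job (illustrative values only).
SLA_WINDOWS = {
    "daily_trades_ingest": timedelta(hours=2),
    "risk_positions_load": timedelta(minutes=45),
}

def check_sla(job_name, started_at, finished_at):
    """Return (within_sla, elapsed) for one job run against its window."""
    window = SLA_WINDOWS[job_name]
    elapsed = finished_at - started_at
    return elapsed <= window, elapsed

# A run that took 90 minutes against a 2-hour window is within SLA.
ok, elapsed = check_sla(
    "daily_trades_ingest",
    datetime(2024, 1, 1, 2, 0),
    datetime(2024, 1, 1, 3, 30),
)
```

In practice such a check would feed an alerting tool (the posting names ITRS and Dynatrace) rather than run standalone; this sketch only shows the threshold comparison itself.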
Containerization & Cloud-Native Enablement
- Dockerize ingestion and processing components, including C3 workloads.
- Enable and operate Spark workloads on Kubernetes.
- Promote container execution models for scalability and operational efficiency.

Workflow Orchestration
- Design and manage complex workflows and DAGs using Oozie, Autosys, and Marathon.
- Ensure fault-tolerant, auditable, and reliable pipeline execution.
- Define orchestration standards for onboarding new applications and business lines.

Data Lake & Platform Engineering Scope

Performance Optimization
- Tune Impala, YARN, and Kudu for multi-tenant performance and fairness.
- Optimize Spark executor memory, shuffle behavior, and resource allocation.
- Configure storage and compute parameters for high-throughput processing.

Data Lake Management
- Support and optimize data formats on HDFS.
- Maintain efficient partitioning strategies for Hive, Kudu, and HBase.
- Enable scalable, governed access for analytics and operational workloads.

Security & Compliance
- Apply and manage fine-grained access controls using Apache Ranger.
- Integrate Kerberos and ensure encryption in transit and at rest.
- Maintain compliance with regulatory, audit, and enterprise standards.

Enterprise Leadership & Strategic Responsibilities
- Partner with senior executives across Markets, Risk, and Banking to align platform strategy.
- Lead modernization efforts, including Hadoop upgrades and end-of-life remediation for 2,000+ nodes.
- Coordinate resiliency and disaster-recovery testing for 100+ tenant applications.
- Drive cross-business enablement through unified governance, cataloging, and data-management services.
- Onboard and integrate AI, data-modeling, and GenAI platforms (C3, Talend, AskGPS).
- Provide capacity planning, infrastructure forecasting, and financial guidance for annual investment planning.
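To illustrate the data-lake partitioning duty above, Hive-style tables lay out HDFS data in key=value directories so queries can prune partitions. The sketch below is illustrative only: the table root, partition keys (dt, region), and the `hive_partition_path` helper are hypothetical, not taken from the posting.

```python
from datetime import date

def hive_partition_path(table_root, business_date, region):
    """Build a Hive-style partition path (key=value directories).

    Date and region are common partition keys for time-series lake data;
    the specific keys here are hypothetical examples.
    """
    return f"{table_root}/dt={business_date.isoformat()}/region={region}"

path = hive_partition_path("/data/lake/trades", date(2024, 3, 15), "emea")
# e.g. /data/lake/trades/dt=2024-03-15/region=emea
```

Choosing partition keys with moderate cardinality (dates, regions) keeps directory counts manageable while still letting the query engine skip irrelevant data.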
Job Type: Full-time
Career Level: Mid Level