At U.S. Bank, we’re on a journey to do our best. Helping the customers and businesses we serve to make better and smarter financial decisions and enabling the communities we support to grow and succeed. We believe it takes all of us to bring our shared ambition to life, and each person is unique in their potential. A career with U.S. Bank gives you a wide, ever-growing range of opportunities to discover what makes you thrive at every stage of your career. Try new things, learn new skills and discover what you excel at—all from Day One.

Job Description

Design and implement scalable data lake solutions using Snowflake and Databricks. Develop and optimize data pipelines for ingestion, transformation, and storage. Manage data governance, quality, and security across cloud environments, and implement performance tuning, automation, and CI/CD for data workflows. Collaborate with cross-functional teams to support cloud migration activities.

Performance Optimization
Tune Hadoop, Hive, and Spark jobs and configurations for optimal performance, efficiency, and resource utilization. This includes optimizing queries, managing partitions, and leveraging in-memory capabilities.

Troubleshooting and Support
Diagnose and resolve issues related to Linux servers, networks, cluster health, job failures, and performance bottlenecks. Provide on-call support and collaborate with other teams to ensure smooth operations.

Security, Governance, and Secrets Management
Implement and manage security measures within the Cloudera environment, including Kerberos, Apache Ranger, and Atlas, to ensure data governance and compliance. Set up and manage HashiCorp Vault for secure key and secrets management. Utilize CyberArk for privileged access management and secure administrative tasks on the cluster.

Data and Application Migration
Migrate DataStage ETL jobs to Azure cloud services such as Azure Synapse Analytics, Azure Databricks, or Snowflake. Ensure data integrity, performance tuning, and validation.
Automation and Scripting
Develop scripts (e.g., shell, Ansible, Python) to automate administrative tasks, deployments, and monitoring. Work with users to develop, debug, and optimize Hive/Spark/Python programs that connect to the Cloudera environment.

Documentation
Create and maintain documentation for system configurations, operational procedures, and troubleshooting knowledge bases.

Vendor Collaboration
Work closely with the vendor to stay current with the latest releases, perform upgrades, and address vulnerabilities.
Job Type
Full-time
Career Level
Mid Level