Bank of America • Posted 7 days ago
Full-time • Mid Level
Onsite • Chicago, IL
5,001–10,000 employees

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. We do this by driving Responsible Growth and delivering for our clients, teammates, communities, and shareholders every day. Being a Great Place to Work is core to how we drive Responsible Growth. This includes our commitment to being an inclusive workplace, attracting and developing exceptional talent, supporting our teammates’ physical, emotional, and financial wellness, recognizing and rewarding performance, and making an impact in the communities we serve.

Bank of America is committed to an in-office culture with specific requirements for office-based attendance, while allowing an appropriate level of flexibility for our teammates and businesses based on role-specific considerations. At Bank of America, you can build a successful career with opportunities to learn, grow, and make an impact. Join us!

This job is responsible for developing and delivering complex requirements to accomplish business goals. Key responsibilities include ensuring that software is developed to meet functional, non-functional, and compliance requirements, and that solutions are well designed, with maintainability, ease of integration, and testing built in from the outset. The job expects strong knowledge of development and testing practices common to the industry, as well as design and architectural patterns.

This Hadoop Engineer (SME) role supports NextGen platforms built around big data technologies (Hadoop, Spark, Kafka, Impala, HBase, Docker containers, Ansible, and more). It requires experience in cluster management of vendor-based Hadoop and data science (AI/ML) products such as Cloudera, Databricks, Snowflake, Talend, Greenfield, ELK, and KPMG Ignite. The Hadoop Engineer is involved in the full life cycle of an application and is part of an agile development process.
The role requires the ability to interact, develop, engineer, and communicate collaboratively at the highest technical levels with clients, development teams, vendors, and other partners. The following section serves as a general guideline for the dimensions of project complexity, responsibility, and education/experience within this role.

  • Codes solutions and unit-tests them to deliver a requirement/story per the defined acceptance criteria and compliance requirements
  • Designs, develops, and modifies architecture components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained
  • Mentors other software engineers and coaches the team on Continuous Integration and Continuous Delivery (CI/CD) practices and the automation tool stack
  • Executes story refinement, definition of requirements, and estimating work necessary to realize a story through the delivery lifecycle
  • Performs spikes/proofs of concept as necessary to mitigate risk or implement new ideas
  • Automates manual release activities
  • Designs, develops, and maintains automated test suites (integration, regression, performance)
  • Works on complex, major or highly visible tasks in support of multiple projects that require multiple areas of expertise
  • Provides subject matter expertise in managing Hadoop and data science platform operations, with a focus on Cloudera Hadoop, Jupyter Notebook, OpenShift, and Docker container cluster management and administration
  • Integrates solutions with other applications and platforms outside the framework
  • Manages platform operations across all environments, including upgrades, bug fixes, deployments, metrics/monitoring for resolution and forecasting, disaster recovery, and incident/problem/capacity management
  • Serves as a liaison between client partners and vendors in coordination with project managers to provide technical solutions that address user needs
  • 5+ years of experience with Hadoop, Kafka, Spark, Impala, Hive, HBase, etc.
  • Strong knowledge of Hadoop architecture, HDFS, Hadoop clusters, and the Hadoop administrator's role
  • In-depth knowledge of fully integrated AD/Kerberos authentication
  • Experience setting up optimal cluster configurations
  • Debugging knowledge of YARN
  • Hands-on experience analyzing various Hadoop log files, compression, encoding, and file formats
  • Expert level knowledge of Cloudera Hadoop components such as HDFS, Sentry, HBase, Kafka, Impala, SOLR, Hue, Spark, Hive, YARN, Zookeeper and Postgres
  • Strong technical knowledge: Unix/Linux; databases (Sybase/SQL/Oracle); Java, Python, Perl, and shell scripting; infrastructure
  • Experience with monitoring and alerting, and with job scheduling systems
  • Comfort with frequent, incremental code testing and deployment
  • Strong grasp of automation/DevOps tools: Ansible, Jenkins, SVN, Bitbucket
  • Skills: Application Development, Automation, Influence, Solution Design, Technical Strategy Development, Architecture, Business Acumen, DevOps Practices, Result Orientation, Solution Delivery Process, Analytical Thinking, Collaboration, Data Management, Risk Management, Test Engineering
  • Experience working on Big Data Technologies
  • Cloudera Admin / Dev Certification
  • Certification in Cloud, Docker-Container, OpenShift Technologies
  • This role is eligible to participate in the annual discretionary plan. Employees are eligible for an annual discretionary award based on their overall individual performance results and behaviors; the performance and contributions of their line of business and/or group; and the overall success of the Company.
  • This role is currently benefits eligible.
  • We provide industry-leading benefits, access to paid time off, resources and support to our employees so they can make a genuine impact and contribute to the sustainable growth of our business and the communities we serve.