U.S. Bank • Posted 9 months ago
$122,117 - $145,000/Yr
Full-time • Mid Level
Chicago, IL
Credit Intermediation and Related Activities

U.S. Bank is seeking a full-time Data Engineer (multiple openings) in Chicago, IL. The role centers on building and operating data pipelines in Microsoft Azure: loading data from diverse sources with Azure Data Factory (ADF), processing it at scale with Azure Databricks and SparkSQL, and storing it in Azure Blob Storage, Azure Data Lake Storage, and Azure SQL Database. The Data Engineer will also design migrations from on-premises systems to Azure, tune HiveQL and SQL queries, monitor and manage Hadoop clusters, apply DevOps practices for continuous integration and deployment (CI/CD) in big data environments, and ensure data security and compliance through data governance, encryption, and access control.

Responsibilities:

  • Design, develop, and manage ETL processes using Azure Data Factory (ADF).
  • Administer and optimize large-scale data processing using Azure Databricks and SparkSQL.
  • Implement and maintain data storage solutions utilizing Azure Blob Storage, Azure Data Lake Storage, and Azure SQL Database.
  • Design and implement data migration strategies from on-premises systems to Azure.
  • Develop and refine HiveQL and SQL queries to enhance data retrieval and processing efficiency.
  • Monitor and manage Hadoop clusters for optimal resource allocation and system performance.
  • Implement DevOps practices for continuous integration and deployment (CI/CD) in big data environments.
  • Ensure data security and compliance by implementing best practices in data governance, encryption, and access control.
  • Troubleshoot and resolve issues related to data ingestion, processing, and storage.
Qualifications:

  • Master's degree or equivalent in Information Technology & Management, Computer Science, or Computer Engineering.
  • 3 years of data engineering experience.
  • 24 months of experience designing and implementing data pipelines using Azure Data Factory (ADF) and Databricks.
  • Experience with designing highly scalable distributed data pipelines using big data technologies including Hadoop, Pig, Hive, and Sqoop.
  • Experience developing Hive ETL scripts for ad-hoc analysis, dashboarding, and reporting.
  • Experience creating UAT test cases and test scenarios.
  • Experience designing dimensional data models and optimizing SQL queries.
Benefits:

  • Healthcare (medical, dental, vision)
  • Basic term and optional term life insurance
  • Short-term and long-term disability
  • Pregnancy disability and parental leave
  • 401(k) and employer-funded retirement plan
  • Paid vacation (from two to five weeks depending on salary grade and tenure)
  • Up to 11 paid holiday opportunities
  • Adoption assistance
  • Sick and Safe Leave accruals of one hour for every 30 hours worked, up to 80 hours per calendar year