About The Position

The Fiserv Technology Analyst Program is a two-year, early-career development experience designed to accelerate your growth in the fintech industry. Through two structured assignments within a specific technology track, analysts gain hands-on experience, build technical and professional skills, and collaborate across teams to solve real business challenges. As a performance-driven company, Fiserv is committed to developing emerging talent and creating opportunities for those who demonstrate initiative, growth, and impact. The program includes formal training, mentorship, and exposure to senior leadership, supporting long-term career development within the organization.

We’re looking for an early-career Data Engineer to help design, build, and support reliable data pipelines that move data from source systems into analytics and operational platforms. You’ll work closely with data engineers, analysts, and stakeholders to ensure data is accurate, timely, and scalable.

Requirements

  • Recent graduate with a bachelor's degree completed in December 2024 or later in Computer Science, Data Science, Management Information Systems, Technical Project Management, or another technology-related field
  • 3.0+ GPA
  • 0-2 years of professional work experience
  • Must currently possess valid and unrestricted U.S. work authorization to be considered for this role.
  • Individuals on temporary visas (including, but not limited to, F-1 OPT/CPT/STEM, H-1B, H-2, or TN) or any candidate requiring sponsorship, now or in the future, will not be considered for this role.

Nice To Haves

  • Proficiency in SQL and a solid understanding of relational databases.
  • Basic knowledge of Python, Java, or a similar language for automation and data processing.
  • Understanding of data warehousing concepts and familiarity with cloud platforms (Azure or similar).
  • Strong problem-solving skills and ability to work with large datasets.
  • Experience with Apache Spark, Hadoop, or similar big data frameworks.
  • Familiarity with Databricks workflows or other workflow orchestration tools.
  • Exposure to Databricks, Azure Data Factory, or GCP Dataflow.
  • Knowledge of CI/CD practices and pipelines for data engineering.

Responsibilities

  • Assist in designing and building data pipelines for data ingestion, transformation, and storage.
  • Develop and maintain pipeline workflows to enable efficient data movement and processing.
  • Support data modeling and schema design for analytics and operational use cases.
  • Monitor, troubleshoot, and improve pipeline performance, reliability, and data quality.

Benefits

  • Formal training
  • Mentorship
  • Exposure to senior leadership