Sr Associate, Sr Data Engineer

New York Life · Lebanon, NJ
Posted 239 days ago · $95,000 - $162,500

About The Position

Our New York Life culture has laid the foundation for over 175 years of commitment to our employees, agents, policy owners, and the communities where we live and work. Here you become a valued part of a welcoming, inclusive, and caring organization with a long-standing legacy of stability and growth. That strength rests on our diversified, multi-dimensional business portfolio that goes beyond life insurance. As a Fortune 100 company and industry leader, we provide an environment where you can explore your career ambitions, offering opportunities to tackle meaningful challenges and stretch your skills while balancing work and life priorities. You will be part of an inclusive team guided by our belief in always being there for each other, providing the support and flexibility to grow and reach new heights while making an impact in the lives of others. You are our future, and we commit to investing in you accordingly.

As part of AI&D, you'll have the opportunity to contribute to groundbreaking initiatives that shape New York Life's digital landscape. You will leverage cutting-edge technologies like Generative AI to increase productivity, streamline processes, and create seamless experiences for clients, agents, and employees. Your expertise fuels innovation, agility, and growth, driving the company's success.

Requirements

  • Eight or more years of experience in enterprise-level delivery of Data Engineering solutions
  • Deep understanding of modern data architecture, including experience with data lakes, data warehouses, data marts, relational and dimensional modeling, data quality, and master data management
  • Strong background in Operational Data Stores, Dimensional Modeling, and supporting application data architecture
  • Experience with Redshift, Snowflake, Databricks SQL, Oracle, Postgres, MySQL, and understanding of best practice architectural concepts for relational data models
  • Eight or more years of experience designing and implementing ETL/ELT frameworks for complex warehouses and data marts
  • Proven experience with ETL/ELT tools such as IDMC, DBT, and AWS Glue
  • Proficiency with languages such as Python and PySpark
  • Strong knowledge of AWS cloud technologies
  • Strong, demonstrable hands-on experience with data engineering and data pipelines
  • Proficient in cloud architecture and best practices, particularly with AWS Cloud tools such as Amazon RDS, Redshift, AWS DMS, AWS Glue, Amazon S3, AWS Lambda, and more
  • Hands-on development mentality, with a willingness to troubleshoot and solve complex problems
  • Ability to identify, recommend, and implement ELT process and architecture improvements
  • Ability to assist with and verify solution designs and the production of all design-phase deliverables
  • Bachelor's degree in Computer Science or an engineering discipline
  • Certification in data management (structured and unstructured) and/or cloud architecture or engineering a plus
  • Certification in delivery methodologies such as Scaled Agile a plus

Responsibilities

  • Engage in collaborative relationships across architecture, product management, data, and product delivery teams to design and implement solutions based on defined business challenges and outcomes
  • Participate on scrum teams working in short sprints, ensuring on-time delivery with high quality
  • Build and maintain data engineering solutions on cloud platforms using AWS or Azure services
  • Design, develop, and implement scalable data transformations and ETL/ELT processes using Python, PySpark, and/or data integration tools such as IDMC and DBT
  • Collaborate with data architects and data analysts to understand data requirements and translate them into scalable, high-performance data integration and pipeline solutions
  • Develop and maintain data models and schemas to support data integration and analysis
  • Monitor and troubleshoot data pipeline performance, identifying and resolving bottlenecks and issues
  • Optimize and tune data integration pipelines for performance, reliability, and scalability
  • Implement data quality and validation checks to ensure accuracy and integrity of data through testing
  • Build the infrastructure required for optimal ETL/ELT of data from a wide variety of data sources using SQL and AWS big data technologies
  • Develop and debug code in PySpark/Spark/Python, applying strong SQL and query-optimization techniques
  • Maintain a high degree of knowledge and expertise in cloud-based technologies, contributing to go-forward best practices, standards, and patterns
  • Actively participate in relevant communities of practice to exchange ideas and knowledge

Benefits

  • Full package of benefits for employees
  • Leave programs
  • Adoption assistance
  • Student loan repayment programs
  • Annual discretionary bonus eligibility
  • Incentive program participation eligibility

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Industry: Insurance Carriers and Related Activities
  • Education Level: Bachelor's degree
