Informa Group Plc. · Posted about 4 hours ago
$110,000 - $140,000/Yr
Full-time • Mid Level
Hybrid • New York, NY
5,001-10,000 employees

Curinos empowers financial institutions to make better, faster and more profitable decisions through industry-leading proprietary data, technologies and insights. With decades-long expertise in the financial services industry and a relentless focus on the future, Curinos' technology and analytics ecosystem allows clients to anticipate customer needs and optimize their go-to-market decisions in an increasingly competitive market.

We operate in a hybrid/remote model, and this position is fully remote in the United States or hybrid in the Greater New York, Chicago, or Boston metropolitan areas.

Curinos is hiring a Senior Data Engineer to join our Acquire Product team. This team is building a next-generation B2B SaaS application that empowers financial institutions to attract customers aligned to their portfolio strategy across products. You'll work with a diverse team of talented engineers, AI and ML scientists, and product managers to deliver data-driven, high-quality software solutions that meet the needs of our clients.

Responsibilities:

  • Design, build, and maintain scalable and reliable data pipelines, including ingestion, storage, validation, governance, monitoring, and ELT patterns
  • Proactively identify opportunities and implement solutions to optimize pipeline operations and reduce manual work
  • Develop and maintain ETL data pipelines for large volumes of data, writing clean, maintainable, and efficient code
  • Maintain and evangelize high standards in software and data engineering best practices (SDLC, testing, scaling, performance)
  • Work closely with product managers, data scientists, and software engineers to create and prepare datasets from disparate sources to support the development and delivery of cutting-edge ML models and interactive applications
  • Proactively identify and implement new technologies and improve our reusable tools
  • Conduct code reviews and provide feedback
  • Participate in agile ceremonies
  • Stay current with emerging technologies and industry best practices
Qualifications:

  • Advanced proficiency in Python and SQL
  • Experience with Spark, including distributed computing and optimization (required)
  • Extensive experience with data pipeline and workflow management tools (e.g., Airflow, Databricks, Glue)
  • Strong understanding of data engineering best practices (ETL/ELT, monitoring, observability, validations)
  • Experience with big data, scaling data processing, and distributed computing
  • Strong analytical skills—ability to explore, interpret, and transform raw data into structured, high-quality datasets that support business decisions and advanced analytics
  • Production support experience
  • 3+ years of professional experience
  • Self-discipline and eagerness to learn new skills, tools, and technologies
  • Solid verbal and written communication skills
  • Experience with Databricks (Unity Catalog, Feature Store, Delta Live Tables) preferred
What we offer:

  • Competitive benefits, including a range of Financial, Health and Lifestyle benefits to choose from
  • Flexible working options, including home working, flexible hours and part-time options, depending on the role requirements – please ask!
  • Competitive annual leave, floating holidays, volunteering days and a day off for your birthday!
  • Learning and development tools to assist with your career development
  • Work with industry leading Subject Matter Experts and specialist products
  • Regular social events and networking opportunities
  • Collaborative, supportive culture, including an active DE&I program
  • Employee Assistance Program which provides expert third-party advice on wellbeing, relationships, legal and financial matters, as well as access to counselling services