Senior Engineer - Fraud & Abuse Data Engineering

Target | Brooklyn Park, MN
$95,000 - $171,000

About The Position

As a Senior Engineer, you serve as a specialist on the engineering team that supports the product. You help develop, and gain insight into, the application architecture, and you can distill an abstract architecture into a concrete design and influence its implementation. You apply the appropriate software engineering patterns to build robust and scalable systems, and you are an expert programmer who applies those skills to developing the product. You have the ability to design and implement the architecture on your own, but you choose instead to influence your fellow engineers by proposing software designs and providing feedback on designs and implementations. You show strong problem-solving skills, help the team triage operational issues, and leverage your expertise to eliminate repeat occurrences. Use your skills, experience, and talents to be a part of groundbreaking thinking and visionary goals.

Requirements

  • BA/BS or equivalent; 4+ years building large-scale data systems.
  • Proficient in core platforms; writes organized, maintainable code across multiple languages and distributed frameworks.
  • Skilled in package configuration/deployment and building custom solutions.
  • Designs robust tests; troubleshoots and resolves routine and non-routine issues independently.
  • Delivers high-performance, scalable, secure solutions (high throughput/low latency).
  • Operates effectively in Agile: communicates clearly with partners, aligns team priorities, and understands guest/business impact.
  • Influences and applies data/engineering standards and policies; maintains expertise and stays current through ongoing learning.

Nice To Haves

  • Strong knowledge of data governance, security, and compliance best practices.
  • Excellent collaboration and communication skills with a track record of cross-functional partnership.
  • Prior experience mentoring and guiding junior engineers.

Responsibilities

  • Design, build, and operate scalable batch and streaming data pipelines (Kafka) and ETL/ELT workflows across Hadoop and Google Cloud Platform (GCP); implement monitoring/alerting to meet reliability and SLA targets.
  • Develop high-performance distributed processing with Python, Spark, and Hive; optimize jobs, storage, and throughput for large-scale, high-volume datasets and cost efficiency.
  • Deliver curated, trustworthy datasets for analytics, reporting, and ML with strong data quality, lineage, and governance.
  • Partner with data scientists to operationalize ML on GCP (e.g., Vertex AI), building MLOps pipelines for training, deployment, CI/CD, monitoring, and automated retraining.
  • Integrate on-prem Hadoop data lakes with GCP services to enable seamless hybrid data and model workflows.
  • Collaborate with analysts and product engineers to ensure data is accessible, high-quality, and actionable; provide technical mentorship to junior engineers.
  • Uphold security, privacy, and regulatory compliance across all data engineering practices.
  • Continuously evaluate technologies and design patterns, and drive improvements in performance, scalability, and cost across Hadoop and GCP environments.

Benefits

  • Comprehensive health benefits including medical, vision, dental, and life insurance.
  • 401(k) plan.
  • Employee discount.
  • Short-term and long-term disability.
  • Paid sick leave.
  • Paid national holidays.
  • Paid vacation.
© 2024 Teal Labs, Inc