Pennymac (NYSE: PFSI) is a specialty financial services firm with a comprehensive mortgage platform and integrated business focused on the production and servicing of U.S. mortgage loans and the management of investments related to the U.S. mortgage market. At Pennymac, our people are the foundation of our success and at the heart of our dynamic work culture. Together, we work toward a unified goal of helping millions of Americans achieve their aspiration of homeownership through the complete mortgage journey.

The Sr. Data Platform Engineer - Python/AWS Specialist leads the design, development, and management of our enterprise data pipeline infrastructure, with a primary focus on Python-based solutions and AWS cloud services. The role supports critical business functions through sophisticated data engineering, including pricing analytics, trading systems, hedging models, and pooling operations, ensuring scalable, performant, and reliable data solutions across the organization.

The Sr. Data Platform Engineer - Python/AWS Specialist will:

- Advanced Python Development: Architect, develop, and maintain production-grade Python applications using object-oriented programming, design patterns, and software engineering best practices for enterprise data pipelines
- Expert AWS Cloud Services: Design and implement cloud-native data solutions using AWS services including Lambda, Glue, Step Functions, S3, EventBridge, SQS/SNS, and Kinesis
- Data Pipeline Architecture: Lead the design of scalable ETL/ELT pipelines using Python frameworks such as Apache Airflow, Prefect, or AWS Step Functions for orchestration
- API Development & Integration: Build and maintain RESTful APIs using FastAPI or Flask for data services, microservices, and system integrations (see the second sketch below)
- Serverless & Event-Driven Architecture: Design event-driven data pipelines leveraging AWS Lambda, EventBridge, and serverless patterns for real-time and batch processing (see the first sketch below)
- Infrastructure as Code: Implement and manage cloud infrastructure using CloudFormation, CDK, or Terraform for reproducible, version-controlled deployments

Qualifications:

- Experience with Python data frameworks (e.g., Pandas, NumPy, SQLAlchemy, PySpark) for data transformation and analysis
- Strong experience with SQL and database technologies for data pipeline development and optimization
- Experience with containerization (Docker) and container orchestration (ECS, Kubernetes) for deploying Python services
- Experience with Git, CI/CD pipelines, and collaborative development workflows
- Experience with comprehensive testing strategies, including unit testing, integration testing, and data validation frameworks (pytest, Great Expectations)
- Knowledge of DataOps practices (CI/CD for data pipelines, automated testing, monitoring)
- Knowledge of Agile/Scrum methodologies and tools such as Jira
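To make the serverless, event-driven responsibility above concrete, here is a minimal illustrative sketch, not Pennymac code: a Python AWS Lambda handler that reacts to an S3 object-created event and curates the file with Pandas. The bucket, keys, and transformation step are hypothetical examples.

    # Illustrative only: a minimal event-driven pipeline step.
    # A Lambda handler triggered by an S3 "ObjectCreated" event
    # (delivered directly or routed via EventBridge). All names
    # here are hypothetical, not Pennymac systems.
    import io

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    def handler(event, context):
        # S3 event notifications carry one or more records, each
        # identifying the bucket and object key that fired the event.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # Read the raw CSV into a DataFrame for transformation.
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            df = pd.read_csv(io.BytesIO(body))

            # Hypothetical transformation step: drop incomplete rows.
            cleaned = df.dropna()

            # Write the curated output to a separate prefix so downstream
            # batch or real-time consumers can pick it up.
            # (Writing Parquet assumes pyarrow is bundled with the function.)
            out = io.BytesIO()
            cleaned.to_parquet(out, index=False)
            s3.put_object(
                Bucket=bucket,
                Key=f"curated/{key.rsplit('/', 1)[-1]}.parquet",
                Body=out.getvalue(),
            )
        return {"status": "ok"}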
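Similarly, the API development responsibility might resemble this minimal FastAPI sketch; the route, model, and pricing data are invented for illustration and are not a Pennymac API.

    # Illustrative only: a small FastAPI data service.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="example-data-service")

    class PriceQuote(BaseModel):
        loan_id: str
        rate: float

    # Hypothetical in-memory stand-in for a real database lookup.
    _QUOTES = {"demo-loan-1": 6.25}

    @app.get("/quotes/{loan_id}", response_model=PriceQuote)
    def get_quote(loan_id: str) -> PriceQuote:
        # Return a 404 for unknown loans, following common REST
        # conventions for missing resources.
        if loan_id not in _QUOTES:
            raise HTTPException(status_code=404, detail="loan not found")
        return PriceQuote(loan_id=loan_id, rate=_QUOTES[loan_id])

If the file is saved as quotes.py, it can be served locally with: uvicorn quotes:app --reload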
Job Type: Full-time
Career Level: Mid Level