AWS Data Engineer II

LPL Financial · San Diego, CA
$44 - $74 · Hybrid

About The Position

At LPL Financial, we empower professionals to shape their success while helping clients pursue their financial goals with confidence. What if you could have access to cutting-edge resources, a collaborative environment, and the freedom to make an impact? If you're ready to take the next step, discover what’s possible with LPL Financial.

LPL Financial is seeking an AWS Data Engineer II to join our Enterprise Data Integration Framework (EDIF) team. In this role, you will design, develop, and maintain robust, scalable data pipelines and integration solutions within our AWS cloud environment, contributing to the core infrastructure that powers our data-driven initiatives. The ideal candidate is a strong collaborator who can quickly ramp up on EDIF’s event-driven workflows, AWS-centric architecture, microbatch ingestion patterns, and Glue-based transformation framework. The role partners closely with senior engineers, architects, analysts, and business stakeholders to maintain and evolve a platform that ensures reliable, timely enterprise data ingestion.

What are we looking for? We want motivated, collaborative engineers who thrive in a fast-paced, dynamic data-integration environment and can deliver a world-class client experience. Ideal candidates are client-focused and team-oriented, show strong analytical thinking and attention to detail, learn the Enterprise Data Integration Framework quickly, and execute in a way that encourages creativity and continuous improvement. We value teammates who pursue excellence, communicate clearly, adapt rapidly, and embody a culture where we win together, support each other, and deliver high-quality results for our customers.

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or related field.
  • 3+ years of hands-on experience in data engineering or ETL development.
  • Experience with core AWS data and compute services, including S3, Lambda, Glue, DynamoDB, Step Functions, CloudWatch, IAM, Athena, and PostgreSQL on RDS/Aurora.
  • Proficiency with Python and/or PySpark for data transformation (see the sketch following this list).
  • Experience working with relational databases (PostgreSQL, SQL Server, Oracle, etc.).
  • Understanding of data modeling, schema evolution, and data validation principles.
  • Experience with Git-based version control and CI/CD workflows.
  • Strong ability to analyze data structures and implement efficient transformations.
  • Strong analytical, debugging, and problem-solving skills.
  • Excellent communication and teamwork skills.
  • Ability to work independently in a fast-paced environment.
  • Experience with both agile and waterfall methodologies.
  • Ability to meet tight deadlines while delivering high-quality results.
  • Continuous improvement mindset and strong ownership of platform reliability.
  • Ability to learn quickly and adapt to evolving ingestion requirements.
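
To make the PySpark expectation concrete, here is a minimal sketch of the kind of validate-transform-enrich step the role describes. This is not LPL's actual EDIF code: the bucket paths, column names, and validation rules are hypothetical illustrations only.

    # Minimal PySpark sketch of a validate/transform/enrich step.
    # All paths, columns, and rules below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("edif-style-sketch").getOrCreate()

    # Hypothetical vendor feed landed in S3 as CSV.
    raw = spark.read.option("header", True).csv("s3://example-bucket/vendor-feed/")

    # Validation: require a key column; cast amounts and flag cast failures.
    validated = (
        raw.filter(F.col("account_id").isNotNull())
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("is_valid", F.col("amount").isNotNull())
    )

    # Enrichment: stamp each row with its ingestion time for auditing.
    enriched = validated.withColumn("ingested_at", F.current_timestamp())

    # Write partitioned Parquet so Athena can query the curated output.
    enriched.write.mode("append").partitionBy("is_valid").parquet(
        "s3://example-bucket/curated/vendor-feed/"
    )

In a real Glue job this logic would typically run inside Glue's job wrapper (GlueContext and job bookmarks), but the transformation core is ordinary PySpark as shown.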

Nice To Haves

  • Experience with event-driven or serverless architectures.
  • Hands-on familiarity with S3 → Lambda → Glue ingestion pipelines (a minimal sketch follows this list).
  • Exposure to secure data-processing patterns (tokenization, encryption, PII governance).
  • Experience with mastering tools or vendor dataset onboarding.
  • Experience in financial services or regulated data environments.
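
Since the S3 → Lambda → Glue pattern is called out above, here is a minimal sketch of the trigger side, assuming a hypothetical Glue job name: an S3 event notification invokes the Lambda, which starts a Glue job run for the new object.

    # Sketch: S3 event -> Lambda -> Glue job run.
    # The job name "edif-microbatch-job" is a hypothetical placeholder.
    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # start_job_run is asynchronous: Glue queues the run and returns its ID.
            run = glue.start_job_run(
                JobName="edif-microbatch-job",
                Arguments={"--source_bucket": bucket, "--source_key": key},
            )
            print(f"Started Glue run {run['JobRunId']} for s3://{bucket}/{key}")

Wiring this up requires an S3 event notification (or EventBridge rule) on the landing bucket and IAM permission for the Lambda role to call glue:StartJobRun.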

Responsibilities

  • Develop, maintain, and enhance data ingestion pipelines within the Enterprise Data Integration Framework (EDIF).
  • Build and update AWS Glue ETL jobs in Python (PySpark) for validation, transformation, enrichment, and microbatch processing.
  • Collaborate across engineering, QA, Cloud Operations, and vendor partners to implement new ingestion workflows.
  • Contribute to architectural design, documentation, and best practices that improve EDIF scalability and resilience.
  • Monitor, troubleshoot, and optimize ingestion workflow performance across AWS services (S3, Lambda, Glue, DynamoDB, PostgreSQL, Step Functions, CloudWatch, EventBridge, Athena).
  • Assist with the onboarding of new vendor feeds, schemas, and operational schedules into EDIF.
  • Participate in platform release management, change control, and nightly batch support activities.
  • Maintain ingestion observability through CloudWatch dashboards and EDIF event-monitoring tools (see the metric-publishing sketch after this list).
  • Provide technical support to ensure successful nightly data ingestion and data quality compliance.
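
As one concrete illustration of the observability responsibilities above, the sketch below publishes custom ingestion metrics that a CloudWatch dashboard or alarm could track. The namespace, metric names, and dimensions are hypothetical, not EDIF's actual conventions.

    # Sketch: publish per-feed ingestion metrics to CloudWatch.
    # Namespace, metric names, and dimensions are hypothetical placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def report_ingestion(feed_name: str, rows_loaded: int, failed: bool) -> None:
        cloudwatch.put_metric_data(
            Namespace="EDIF/Ingestion",
            MetricData=[
                {
                    "MetricName": "RowsLoaded",
                    "Dimensions": [{"Name": "Feed", "Value": feed_name}],
                    "Value": float(rows_loaded),
                    "Unit": "Count",
                },
                {
                    "MetricName": "IngestionFailure",
                    "Dimensions": [{"Name": "Feed", "Value": feed_name}],
                    "Value": 1.0 if failed else 0.0,
                    "Unit": "Count",
                },
            ],
        )

    # Example: report_ingestion("vendor-feed-a", rows_loaded=120000, failed=False)

Metrics like these can back both dashboards and alarms, so a failed nightly batch surfaces before downstream consumers notice missing data.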