Develop scalable data pipelines and build new API integrations to support continuing growth in data volume and complexity. Own and deliver projects and enhancements for data platform solutions. Key responsibilities:

- Develop solutions using PySpark/EMR, SQL and databases, AWS Athena, S3, Redshift, AWS API Gateway, Lambda, Glue, and other data engineering technologies.
- Write and maintain complex queries as required to implement ETL/data solutions.
- Implement solutions using AWS and other cloud platform tools, including GitHub, Jenkins, Terraform, Jira, and Confluence.
- Follow agile development methodologies, applying DevOps, DataOps, and DevSecOps practices to deliver solutions and product features.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.

Travel: Up to 5% travel required (domestic and international). This position can be performed remotely (telecommuting permitted).
Job Type: Full-time
Career Level: Mid Level
Number of Employees: 5,001-10,000