This job is closed
We regret to inform you that the job you were interested in has now been closed. Although this specific position is no longer available, we encourage you to continue exploring other opportunities on our job board.
About the position
As a Principal Data Engineer at MedeAnalytics, you will play a crucial role in developing and operating the company's data products. You will be responsible for building data pipelines, managing the data lake, and enabling analytics, visualization, machine learning, and product development efforts. You will also develop large, complex data applications in public cloud environments. This role requires hands-on experience with AWS-based data initiatives. You will collaborate with stakeholders across the organization and work in a hybrid environment spanning both on-premises and cloud systems.
Responsibilities
- Maintain a predictable, transparent, global operating rhythm for data access
- Collect, transport, maintain, and curate data in the Data Lake/data repository
- Centralize and standardize data for use by business, data science, and other stakeholders
- Increase awareness and democratize access to available data
- Contribute to code development in projects and services
- Manage and scale data pipelines from internal and external sources
- Build automation and monitoring frameworks for data pipeline quality and performance
- Implement best practices for systems integration, security, performance, and data management
- Create value through increased adoption of data, data science, and business intelligence
- Collaborate with internal clients for solutioning and POC discussions
- Evolve the architectural capabilities and maturity of the data platform
- Productionize data science models and manage SLAs for data products and processes
- Support large-scale experimentation by data scientists
- Prototype new approaches and build solutions at scale
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Create and audit reusable packages or libraries
Qualifications
- Bachelor's degree preferred; experience in the healthcare space is a plus
- Fluent with AWS cloud services; AWS Certification is a plus; Cloud Data Lake experience required
- Experience with Apache Iceberg or Delta Lake required
- 6+ years of overall technology experience, including 4+ years of hands-on software development, data engineering, and systems architecture
- 4+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools, including Snowflake
- 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc.
- 2+ years of cloud data engineering experience in Oracle OCI/AWS
- Experience with integration of multi-cloud services
Benefits
- Incredible medical, dental, and vision benefits, effective on the first of the month after your start date
- Free single healthcare coverage!
- Company paid Basic Life & AD&D Insurance, STD/LTD
- Robust Employee Assistance Program (EAP)
- 401k with company match
- 9 paid holidays and 3 floating holidays, for 12 total
- Paid time off accrual
- Employee Referral Bonus
- Professional Development
- and more!
Dev & Engineering