Application Developer, Portland, Oregon

Portland General Electric Company, Portland, OR
$148,387

About The Position

At PGE, our work involves dreaming about, planning for, and realizing a smarter, cleaner, more enduring Oregon neighborhood. It's core to our DNA and we haven't stopped since we started in 1888. We energize lives, strengthen communities and drive advancements in energy that promote social, economic and environmental progress. We're always on the lookout for people passionate about leading and being part of teams that are advancing innovative clean energy solutions that are also affordable and accessible to all.

Summary

Portland General Electric Co. seeks an Application Developer to work in Portland, OR.

The Application Developer provides advanced data engineering services for complex, large-scale data systems in a fast-paced, innovative environment. This role involves designing, developing, and optimizing scalable data pipelines, ETL processes, and data warehousing solutions using AWS services and Snowflake. Responsibilities include coding, testing, debugging, and documenting complex data processing applications using Python; ensuring high data quality and integrity; and integrating diverse data sources to meet complex data requirements. The candidate will also design and implement intricate data models and algorithms using technologies such as AWS Glue, AWS Lambda, and Snowflake's native features, and will use tools like AWS CloudWatch and Snowflake's query profiler to optimize query performance and generate analytics reports.

The role requires recommending and designing data architectures that ensure scalability, security, and seamless integration with existing systems. It also involves translating conceptual data models into efficient physical designs in Snowflake, producing detailed technical documentation, and implementing data governance policies to ensure compliance with data privacy regulations using tools like Snowflake's role-based access control (RBAC) and column-level security. The candidate will define and manage complex data integration processes, collaborating with cross-functional teams to align data solutions with business objectives. Responsibilities also include configuring and optimizing cloud-based data environments using AWS S3 and Snowflake, creating rigorous integration test plans, and conducting thorough performance testing.

In addition, the candidate will lead troubleshooting efforts for critical data issues and implement strategic solutions to prevent recurrence. This includes developing and maintaining Python-based ETL jobs for data reconciliation, implementing robust data pipelines using Snowflake tasks and AWS Step Functions, creating efficient stored procedures in Snowflake, and configuring AWS Glue jobs for data cleansing and automation. The role also involves streamlining the software development lifecycle for data solutions using Jenkins pipelines for CI/CD.
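To make the day-to-day work concrete, below is a minimal sketch of the kind of Python-based S3-to-Snowflake load described above. The bucket, table, warehouse, and credential names are hypothetical, and a production pipeline would more likely run through AWS Glue, Lambda, or Step Functions than a standalone script.

import os
import boto3
import snowflake.connector

# Hypothetical names, for illustration only
BUCKET = "example-meter-data"
KEY = "daily/usage_2024-01-01.csv"
LOCAL_PATH = "/tmp/usage.csv"

# Pull the raw extract from S3
boto3.client("s3").download_file(BUCKET, KEY, LOCAL_PATH)

# Load the file into a Snowflake staging table via its table stage
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()
try:
    # Stage the local file, then copy it into the table
    cur.execute(f"PUT file://{LOCAL_PATH} @%USAGE_RAW OVERWRITE = TRUE")
    cur.execute(
        "COPY INTO USAGE_RAW "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1) "
        "ON_ERROR = 'ABORT_STATEMENT'"
    )
finally:
    cur.close()
    conn.close()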

Requirements

  • Master’s degree in Computer Science, Data Science, Software Engineering, or a related field (or foreign equivalent).
  • 3+ years’ experience with Snowflake: advanced SQL, stored procedures, virtual warehouses, storage layers, Time Travel, and Zero-Copy Cloning.
  • 3+ years’ hands-on experience with the AWS cloud platform: S3, Glue, Lambda, Step Functions, and IAM.
  • 3+ years’ hands-on experience with Python and ELT platforms such as Matillion.
  • 3+ years’ experience with version control and CI/CD: Git, Jenkins.
  • 3+ years’ experience with data governance: Snowflake’s RBAC and column-level security.
  • 3+ years’ experience with real-time data streaming: AWS Kinesis, Kafka.
  • 3+ years’ experience with optimization techniques for large-scale data processing in Snowflake and AWS.
  • Experience in utility data operations, including customer, asset, and resource data workflows, and the ability to implement tailored solutions to address industry-specific challenges.

Responsibilities

  • designing, developing, and optimizing scalable data pipelines, ETL processes, and data warehousing solutions using AWS services and Snowflake
  • coding, testing, debugging, and documenting complex data processing applications using Python, ensuring high data quality and integrity, and integrating diverse data sources to meet complex data requirements
  • designing and implementing intricate data models and algorithms using technologies such as AWS Glue, AWS Lambda, and Snowflake's native features
  • using tools like AWS CloudWatch and Snowflake's query profiler to optimize query performance and generate analytics reports
  • recommending and designing data architectures that ensure scalability, security, and seamless integration with existing systems
  • translating conceptual data models into efficient physical designs in Snowflake, producing detailed technical documentation, and implementing data governance policies to ensure compliance with data privacy regulations using Snowflake's role-based access control (RBAC) and column-level security (illustrated in the sketch after this list)
  • defining and managing complex data integration processes, collaborating with cross-functional teams to align data solutions with business objectives
  • configuring and optimizing cloud-based data environments using AWS S3 and Snowflake, creating rigorous integration test plans, and conducting thorough performance testing
  • leading troubleshooting efforts for critical data issues and implementing strategic solutions to prevent recurrence
  • developing and maintaining Python-based ETL jobs for data reconciliation, implementing robust data pipelines using Snowflake tasks and AWS Step Functions, creating efficient stored procedures in Snowflake, and configuring AWS Glue jobs for data cleansing and automation
  • streamlining the software development lifecycle for data solutions using Jenkins pipelines for CI/CD
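As an illustration of the governance responsibility above, the sketch below applies role-based access control and a column-level masking policy from Python using Snowflake's standard SQL for these features. All role, database, table, and column names are hypothetical, and an actual deployment would depend on PGE's own role hierarchy and compliance requirements.

import os
import snowflake.connector

# Hypothetical role, schema, table, and column names for illustration
GOVERNANCE_SQL = [
    # RBAC: a reporting role restricted to reading curated data
    "CREATE ROLE IF NOT EXISTS REPORTING_ANALYST",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE REPORTING_ANALYST",
    "GRANT USAGE ON SCHEMA ANALYTICS.CURATED TO ROLE REPORTING_ANALYST",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.CURATED TO ROLE REPORTING_ANALYST",
    # Column-level security: mask account numbers for non-privileged roles
    """
    CREATE MASKING POLICY IF NOT EXISTS ANALYTICS.CURATED.MASK_ACCOUNT_NUMBER
      AS (val STRING) RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('DATA_STEWARD') THEN val ELSE '****' END
    """,
    """
    ALTER TABLE ANALYTICS.CURATED.CUSTOMERS
      MODIFY COLUMN ACCOUNT_NUMBER
      SET MASKING POLICY ANALYTICS.CURATED.MASK_ACCOUNT_NUMBER
    """,
]

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="SECURITYADMIN",  # assumes a role privileged to manage roles, grants, and policies
)
cur = conn.cursor()
try:
    for stmt in GOVERNANCE_SQL:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()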