Renewables Data Analyst

Deriva Energy
Charlotte, NC (Hybrid)

About The Position

Deriva Energy is a leading Independent Power Producer in the US renewables market, with over 6 GW of wind, solar, and storage projects operating or under construction across the country. Deriva's ultimate parent is Brookfield Renewable, and the company is poised for growth across its wind, solar, and storage portfolios. Join a dynamic team committed to excellence and innovation that envisions a future of energy independence built on resilient, carbon-free generation. We offer competitive compensation, comprehensive benefits, and the opportunity to make a significant impact in the rapidly evolving renewable energy industry. Deriva empowers customers with innovative clean energy solutions that strengthen communities and serve future generations. At Deriva Energy, our mission is to advance the transition to a sustainable, clean energy future and create value for our customers, investors, and other stakeholders.

We own and operate one of the largest renewable generation portfolios in the United States (25 wind sub-fleets, 75 solar sub-fleets, and battery storage assets across 23 states), and we design, build, and run the systems that turn raw SCADA signals into reliable, reportable generation data powering financial reporting, contractual data requirements, and operational decisions across a multi-GW fleet. This position is located at Deriva Energy's headquarters in Charlotte, NC.

About The Role

We're hiring a Renewables Data Analyst II to own the monthly generation data validation and correction process across our entire renewable fleet. You'll partner directly with our Performance Engineering team to curate site-level data integrity and extend our Databricks automation roadmap, replacing manual workflows with scalable pipelines and AI-assisted tooling. This role sits at the center of three things: reliable data, trusted relationships with Performance Engineers, and a growing automation platform.
Every correction you make flows directly into performance and financial reporting, variance analysis, and contractual data requirements. If you enjoy being the person who connects field reality to data truth, and who builds the tooling that makes that connection scale, this is the role for you.

Requirements

  • Bachelor's degree in Information Technology, Engineering, Business, or another technical field
  • One (1) year or more of related work experience in data analysis, data engineering, or a similar role
  • In lieu of a bachelor's degree, a HS diploma/GED and six (6) years or more of relevant work experience
  • Demonstrated proficiency in SQL and Python (PySpark preferred) with the ability to write and debug data transformations fluently
  • Experience working with time series data (10-minute intervals, hourly rollups, monthly aggregations)
  • Comfort with messy, inconsistent data across multiple source systems and the judgment to investigate discrepancies
  • Strong attention to detail under volume
  • Ability to work independently across data systems and operational processes
  • Demonstrated interpersonal skills with the ability and willingness to partner with multiple other groups throughout the organization
  • Hands-on experience with Databricks / Delta Lake / PySpark and medallion architecture or incremental pipeline design
  • SCADA / OSIsoft PI experience — PI tags, backfill, Event Frames
  • Renewable energy operations exposure (turbine availability, curtailment, outage flags, scheduled downtime)
  • Track record replacing manual workflows with automation — including willingness to adopt AI-assisted tooling (LLMs, anomaly detection, automated triage) for data quality workflows
  • PowerBI (DAX measures, dashboard creation) and experience building internal tools or lightweight apps
  • Demonstrated oral and written communication, organizational, presentation, and facilitation skills
  • Two (2) years or more of renewable operational or utility experience
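The time-series experience above centers on interval aggregation: 10-minute SCADA readings rolled up to hourly and monthly values. As a minimal illustration only (plain Python rather than PySpark, with a hypothetical `hourly_rollup` helper that is not a Deriva tool), an hourly rollup of 10-minute power readings might look like:

```python
from collections import defaultdict
from datetime import datetime


def hourly_rollup(readings):
    """Average 10-minute interval power readings into hourly means.

    `readings` is a list of (datetime, megawatts) tuples; returns a dict
    keyed by the start of each hour. Hours with missing intervals simply
    average over fewer samples.
    """
    buckets = defaultdict(list)
    for ts, mw in readings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(mw)
    return {hour: sum(vals) / len(vals) for hour, vals in buckets.items()}


# Six 10-minute readings within one hour collapse to a single hourly average.
readings = [(datetime(2024, 1, 1, 0, 10 * i), 2.0 + 0.1 * i) for i in range(6)]
print(hourly_rollup(readings))
```

In production this grouping would typically be a PySpark `groupBy` over a time window rather than a Python loop, but the reconciliation logic is the same shape.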

Nice To Haves

  • Master's degree in Information Technology, Engineering, Business, or another technical field
  • Three (3) years or more of related work experience

Responsibilities

  • Reporting — Generation Data Validation: Run the monthly 5-business-day reporting cycle end-to-end, reconciling generation figures across PI Connect (SCADA), PowerOptix, and NetSuite at generating unit and plant granularity.
  • Process PI edits and validate downstream propagation to PowerBI and Tab Model refreshes.
  • Validate KPI impact and support variance analysis at month-end.
  • Author and maintain Databricks Lakeview dashboards and PySpark notebooks used to produce monthly metrics and reporting deliverables.
  • Data Quality Control — Partnership with Performance Engineers: Partner directly with the Performance Engineering team (5–6 PSEs) to curate data integrity: site meter power validation, meteorological sensor review, automated data substitution investigation, outage categorization, and data pipeline validity monitoring.
  • Serve as the liaison between the PE team and IT — translating field observations into structured data corrections and system fixes.
  • Design and document QC measures within Databricks (Unity Catalog, Delta tables); test and approve system changes or enhancements affecting data quality and reporting.
  • Maintain the data-quality narrative for each month's reporting cycle — what changed, why, and what it means for KPIs.
  • Process Improvement — AI-Assisted Automation: Build and extend Databricks PySpark / SQL pipelines using medallion architecture and incremental processing to replace manual validation workflows.
  • Contribute to an active automation roadmap: PI Tag Flatline Detector, Monthly Metrics Pipeline, Meteorological Sensor Monitoring, Bat Curtailment Compliance, Poseidon Status.
  • Adopt AI-assisted tooling (LLM-based anomaly triage, automated exception narratives, assisted root-cause investigation) where it reduces manual load without sacrificing data integrity.
  • Identify and pilot new technologies that improve data collection, validation throughput, and analyst productivity.
  • Business & Technical Knowledge: Maintain fluency in all Renewables Operations applications: PI Connect / PI Vision / PI Asset Framework, Databricks (Unity Catalog, Lakeview), PowerBI, PowerOptix, NetSuite, OneNote Monthly Report, JIRA.
  • BI Tool Administration & Training: Administer and train end-users on existing BI and reporting tools, including Databricks dashboards and PowerBI reports.
  • Other / Project Support: Ad-hoc project work ensuring data integrity and compliance with business processes and procedures.
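As a rough sketch of the kind of logic behind a roadmap item like the PI Tag Flatline Detector (plain Python with illustrative names, not Deriva's actual pipeline): a stale PI tag often repeats an identical reading for many consecutive 10-minute intervals, so a simple run-length check can flag candidate spans for analyst review.

```python
def flatline_spans(values, min_run=6, tolerance=0.0):
    """Find runs of consecutive near-identical sensor values.

    Returns a list of (start_index, end_index_inclusive) spans at least
    `min_run` samples long, where every value in the span stays within
    `tolerance` of the first value in the run.
    """
    spans, run_start = [], 0
    for i in range(1, len(values) + 1):
        if i == len(values) or abs(values[i] - values[run_start]) > tolerance:
            if i - run_start >= min_run:
                spans.append((run_start, i - 1))
            run_start = i
    return spans


# A tag stuck at 3.2 MW for eight intervals is flagged; normal variation is not.
series = [3.1, 3.3] + [3.2] * 8 + [3.4, 3.0]
print(flatline_spans(series))  # [(2, 9)]
```

A real detector would also need to ignore legitimate flat conditions (a curtailed or offline unit genuinely reads constant), which is where the outage flags and curtailment context mentioned above come in.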

Benefits

  • Competitive compensation
  • Comprehensive benefits
  • Health Insurance
  • Dental Insurance
  • Vision Insurance
  • 401(k) with matching
  • Employee assistance program
  • Flexible spending account
  • Life insurance
  • Paid time off
  • Parental leave
  • Attractive Bonus Potential