Senior Software Engineer (Cloud & Data Platforms)

Customized Energy Solutions, Philadelphia, PA

About The Position

Customized Energy Solutions (CES) is a global energy services and technology company that helps market participants operate, comply, and compete in deregulated electricity and natural gas markets. Founded in 1998 and headquartered in Philadelphia, CES works with utilities, independent power producers, energy suppliers, developers, asset owners, and investors across North America and globally. CES delivers market intelligence, regulatory and market design support, asset and portfolio management, retail market operations, and energy technology solutions. Our teams track and interpret ISO/RTO rules and policy developments, support resource planning and market participation, manage operational and settlement processes, and develop proprietary software platforms, including CES BLUE, GOLD, RED, GRIDBOOST, and CoMETS, that help clients manage risk, optimize performance, and respond effectively to market change.

CES is committed to advancing transparent, efficient, and non-discriminatory energy markets while delivering practical, high-quality solutions marked by integrity, rigor, and long-term client value. CES has been nationally and regionally recognized for sustained growth and innovation, including listings on the Inc. 500|5000 and the Philadelphia Business Journal's Top 100 Companies, as well as a Best Places to Work designation with Hall of Fame status for five or more consecutive years. With headquarters in Philadelphia and offices across the U.S., Canada, Japan, India, and Vietnam, CES offers a collaborative, flexible, and globally connected work environment for professionals passionate about the future of energy.

We are seeking a talented Senior Software Engineer to join our engineering team. The ideal candidate will take end-to-end ownership of applications and systems, bridging the gap between infrastructure and feature development. You will build and maintain cloud-native solutions using modern AWS services and Databricks, with a focus on scalability, reliability, and operational excellence. This role combines development expertise with operational accountability: you own the code, the infrastructure, and the impact.

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field (or equivalent professional experience).
  • 3+ years of professional backend software development experience.
  • Strong proficiency in Python and SQL for building scalable applications and querying large datasets.
  • Hands-on experience with AWS services: Lambda, ECS, S3, DynamoDB, RDS, and IAM best practices.
  • Experience with Databricks, Delta Lake, or Apache Spark for building data pipelines and analytics solutions.
  • Proficiency in Infrastructure-as-Code using Terraform or CloudFormation for managing cloud infrastructure.
  • Experience with modern database technologies: MongoDB Atlas, DynamoDB, and relational databases (RDS, PostgreSQL, MySQL).
  • Solid understanding of API design, RESTful services, and real-time communication patterns (MQTT, message queues, and Kafka).
  • Demonstrated ability to take ownership of systems and drive projects from conception through production.
  • Excellent troubleshooting, problem-solving, and analytical skills.
  • Strong communication and collaboration skills; ability to work effectively in distributed teams.

Nice To Haves

  • Familiarity with LLM frameworks and RAG (Retrieval Augmented Generation) architecture.
  • Ability to use code assistant tools effectively.
  • Experience with CI/CD pipelines, GitOps workflows, and DevOps practices.
  • Knowledge of data governance, Delta Lake, or Unity Catalog for managing data assets.
  • Exposure to observability and monitoring tools (Elastic, Datadog, CloudWatch).
  • Experience with distributed systems, eventual consistency, and high-availability architecture patterns.
  • Track record of mentoring junior engineers and establishing team best practices.

Responsibilities

  • Design and build scalable backend services leveraging AWS Lambda, ECS, and AWS Copilot for deployment and orchestration.
  • Architect data pipelines and analytics solutions using Databricks, Unity Catalog, and Apache Spark to process large-scale energy market and operational data.
  • Design and implement data storage strategies using S3 for data lakes, DynamoDB for high-performance NoSQL workloads, and RDS/MongoDB Atlas for relational and document databases.
  • Implement Infrastructure-as-Code (Terraform) to manage AWS resources, database clusters, and cloud infrastructure reproducibly across dev/qa/uat/prod environments.
  • Own the full lifecycle of assigned solutions: from design through production support, including monitoring, alerting, and incident response.
  • Proactively identify and resolve production issues, conducting root cause analysis and implementing preventive measures.
  • Leverage observability stacks (e.g., Elastic) and AWS CloudWatch to monitor application and data pipeline performance, set up dashboards, and maintain system uptime.
  • Establish and maintain operational runbooks, alerting policies, and SLAs for systems under your ownership.
  • Work closely with cross-functional teams (product, data engineering, platform teams) to deliver integrated solutions.
  • Establish best practices for development, testing, and deployment; advocate for process improvements and tooling enhancements.
  • Participate in design reviews, code reviews, and architectural discussions to maintain high standards of code quality and system design.
  • Share knowledge with the team through documentation, pair programming, and knowledge-sharing sessions.

Benefits

  • Competitive salary commensurate with experience
  • Performance bonus
  • Profit-sharing
  • Medical Savings Account
  • Comprehensive health insurance
  • Disability insurance
  • Life insurance
  • 401(k) matching
  • Tuition reimbursement