Software Engineer Lead

CGI
Pittsburgh, PA (Onsite)

About The Position

This role requires being onsite at our client site five days a week in Pittsburgh, PA; Cleveland, OH; Birmingham, AL; or Dallas, TX. We are seeking a highly skilled, client-focused Lead Software Engineer to design, build, and optimize a real-time analytics solution. The role requires deep expertise in building real-time analytics with Kafka streaming, ETL development, Hadoop, and SQL, along with the ability to collaborate closely with clients, stakeholders, and cross-functional teams. The ideal candidate combines strong technical proficiency with excellent communication skills, enabling them to translate business requirements into robust data solutions.

Requirements

  • 10+ years of hands-on experience in data engineering roles
  • Strong experience building streaming pipelines
  • Hands-on expertise with the Hadoop ecosystem, Kafka, Flink, Spark Streaming, and SQL
  • Proficiency in Scala and Python
  • Experience with Oracle Database, SQL, and PL/SQL
  • Familiarity with CI/CD tools (Git, Bitbucket, uDeploy, ServiceNow)

Soft Skills & Client-Facing Capabilities

  • Strong verbal and written communication skills
  • Ability to explain complex technical concepts to non-technical stakeholders
  • Proven experience working in client-facing or consulting roles
  • Strong problem-solving and analytical thinking
  • Ability to manage multiple priorities in a fast-paced environment
  • Collaborative mindset with leadership capabilities

Nice To Haves

  • Experience in building enterprise-scale data platforms
  • Exposure to cloud-based data ecosystems (AWS, Azure, or GCP)
  • Knowledge of data governance, security, and compliance practices
  • Experience mentoring junior team members
  • Thinks beyond code and understands business impact
  • Can lead conversations, not just follow requirements
  • Balances technical depth with strong interpersonal skills
  • Comfortable owning solutions end-to-end

Responsibilities

  • Design and implement low-latency, high-throughput streaming pipelines using Kafka, Flink, and Spark Streaming.
  • Build and maintain event-driven architectures for real-time data ingestion and processing.
  • Develop stateful and stateless stream processing applications using Flink and Spark Structured Streaming.
  • Collaborate closely with application teams to define integration points and API specifications.
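To illustrate the stateful stream processing named above, here is a minimal, framework-free Python sketch of keyed tumbling-window aggregation, the core pattern behind stateful operators in Flink and Spark Structured Streaming. The event shape and window size are hypothetical; a production pipeline would use the frameworks listed in the requirements rather than this in-memory loop.

```python
from collections import defaultdict


def process_stream(events, window_seconds=60):
    """Stateful keyed aggregation: count events per key per tumbling window.

    Illustrative only -- mimics the shape of a stateful streaming operator
    without any framework dependency. State maps (key, window_start) to a
    running count, as a Flink keyed state or Spark state store would.
    """
    state = defaultdict(int)
    for key, timestamp in events:
        # Assign each event to the tumbling window containing its timestamp.
        window_start = (timestamp // window_seconds) * window_seconds
        state[(key, window_start)] += 1
    return dict(state)


# Hypothetical click events as (user_id, epoch_seconds) pairs.
events = [("u1", 5), ("u1", 30), ("u2", 61), ("u1", 65)]
counts = process_stream(events)
# "u1" has 2 events in window 0 and 1 in window 60; "u2" has 1 in window 60.
```

A stateless variant of the same pipeline (e.g., per-event filtering or enrichment) would simply drop the `state` dictionary, which is the distinction the responsibility above draws.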

Benefits

  • Competitive compensation
  • Comprehensive insurance options
  • Matching contributions through the 401(k) plan and the share purchase plan
  • Paid time off for vacation, holidays, and sick time
  • Paid parental leave
  • Learning opportunities and tuition assistance
  • Wellness and well-being programs