Principal Software Engineer

Solera, Westlake, OH

About The Position

We're looking for a pragmatic, hands-on Senior Data Engineer who gets things done. You'll spend significant time building data pipelines and infrastructure while helping elevate the technical skills of the broader team. This role is ideal for someone who thrives on modernizing legacy data systems, leverages AI-assisted development tools to accelerate delivery, and isn't afraid to roll up their sleeves to architect and implement scalable data solutions. You'll balance individual contribution with mentorship, helping less experienced engineers grow their craft through practical guidance and code review.

Requirements

  • 15+ years of professional data engineering experience
  • 5+ years in a technical leadership position
  • Proven track record of modernizing legacy data systems and cloud migrations
  • Strong experience with AI-assisted development tools and workflows
  • History of mentoring and developing junior engineers
  • Hands-on technical involvement (not purely managerial)
  • Expert-level proficiency in Python and SQL
  • Deep experience with data warehousing solutions (Snowflake, Redshift, BigQuery, or Fabric)
  • Strong background in ETL/ELT design and optimization (Airflow, dbt, Glue, Informatica, or Talend)
  • Hands-on experience with big data technologies (Spark, Hadoop, Kafka)
  • Proficiency with cloud platforms: AWS (S3, RDS, Lambda, Glue) or Azure (Data Factory, Synapse, Databricks) or GCP (BigQuery, Dataflow)
  • Experience with both relational (PostgreSQL, MySQL, SQL Server, Oracle) and NoSQL databases (MongoDB, Cassandra, DynamoDB)
  • Solid understanding of containerization (Docker) and orchestration (Kubernetes)
  • Expertise in data modeling techniques (dimensional, relational, NoSQL, ER modeling, UML)
  • Strong skills in database design and performance optimization
  • Experience architecting both batch and real-time pipelines
  • Experience implementing data lakes and lakehouses
  • Understanding of data enrichment, transformation, security, movement, and data integrity
  • Experience with data mesh/data fabric architectures
  • Subject matter expertise in migrating on-prem applications to cloud PaaS
  • Familiarity with AWS migration services (DMS, SMS)
  • Experience creating migration patterns for MSSQL, MySQL, Oracle, IBM DB2
  • Bias toward action and delivering working solutions
  • Strong communication skills with both technical and non-technical stakeholders
  • Ability to translate business requirements into technical solutions
  • Ability to manage multiple priorities and deliver results independently
  • Collaborative mindset with genuine interest in helping others grow

Nice To Haves

  • Master's degree in Computer Science, Engineering, or related field
  • Experience with Java development
  • Familiarity with Infrastructure as Code (Terraform, CloudFormation)
  • Knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI)
  • Experience with monitoring and observability tools (Datadog, Grafana)
  • Understanding of microservices architecture
  • Time-series database experience (InfluxDB, TimescaleDB)
  • Knowledge of Apache Iceberg or Hive

Responsibilities

  • Design and implement scalable data pipelines for both batch and real-time processing
  • Modernize legacy on-premises data systems and migrate to cloud-based PaaS solutions
  • Leverage AI-powered development tools (GitHub Copilot, ChatGPT, Claude, etc.) to accelerate feature development and data engineering workflows
  • Architect end-to-end data solutions from collection and storage through modeling and consumption
  • Guide multi-terabyte database migrations across different data-tier technologies
  • Build data-intensive applications with APIs and streaming data pipelines
  • Mentor data engineers through pairing sessions, code reviews, and practical guidance
  • Share best practices for AI-assisted development and modern data engineering tooling
  • Establish coding standards, technical documentation, and architectural patterns
  • Foster a culture of continuous learning and technical excellence
  • Create conceptual and logical data models, including source-to-target mappings and data lineage
  • Design solutions to prepare and transform data for multi-platform consumption
  • Implement data quality controls and monitoring systems
  • Optimize data models and database designs for performance and reliability
  • Build and maintain cloud-based data infrastructure (AWS/Azure/GCP)
  • Ensure data security, compliance, and governance standards are met
  • Evaluate and recommend new data technologies that solve real problems