Data Engineer / Sr Data Engineer (GCP, BigQuery)

Applied Systems, Inc.
$60,000 - $160,000

About The Position

Amazing Career Moments Happen Here. Transforming the insurance industry is ambitious, we know. That’s why at Applied, we’re building a team that shows up every day ready to learn, willing to try new things, and driven to deliver innovative software and services that make us indispensable to our customers – all within a culture built on values that make us indispensable to each other too. With 40+ years of experience in the insurtech game, we’re not just redefining what’s achievable, we’re creating a place where amazing career moments are made possible.

EZLynx is seeking a skilled and motivated Data Engineer to join our growing data team. The ideal candidate will be responsible for designing and implementing data pipelines, building efficient ETL processes, and managing our Looker instance to enable data-driven decision-making across the organization. In this new role, you will work closely with cross-functional teams to deliver scalable data solutions and integrate platforms such as SQL Server, BigQuery, and Looker. This is an exciting opportunity for someone who is passionate about leveraging data to drive business outcomes, enjoys solving complex problems, and thrives in a fast-paced, collaborative environment.

Requirements

  • 3+ years of experience with Google BigQuery for data warehousing
  • Experience with GCP (Pub/Sub, Dataflow, Cloud Functions, IAM)
  • Experience managing Looker environments, including performance metrics and monitoring
  • Experience with CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in SQL, including stored procedures, window functions, and query optimization (see the window-function sketch after this list)
  • Proficiency in building and maintaining ETL pipelines to process large datasets
  • Knowledge of data modeling best practices (star/snowflake schemas, normalization)
  • Ability to work cross-functionally in an agile, fast-paced environment
  • Experience with scripting languages (Python, Bash, etc.); Go experience a plus
  • Proficiency with Kubernetes, Terraform, CI/CD tools (GitHub Actions, Jenkins, etc.)
  • Familiarity with CDC tools (Debezium, Kafka, etc.) and data lake architecture
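
For context on the SQL and BigQuery expectations above, here is a minimal window-function sketch using the google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical placeholders, not part of the role description:

    # pip install google-cloud-bigquery
    from google.cloud import bigquery

    # Hypothetical table and columns; substitute real identifiers.
    SQL = """
    SELECT
      policy_id,
      premium,
      -- Window function: rank each policy's premium within its agency.
      RANK() OVER (PARTITION BY agency_id ORDER BY premium DESC) AS premium_rank
    FROM `my-project.insurance.policies`
    """

    client = bigquery.Client()  # Authenticates via Application Default Credentials.
    for row in client.query(SQL).result():
        print(row.policy_id, row.premium_rank)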

Nice To Haves

  • 5+ years of experience with Google BigQuery for data warehousing
  • Proven impact managing and enhancing Looker environments, including performance metrics and monitoring
  • Advanced knowledge of SQL and Python with the proven ability to optimize queries (see the cost-estimation sketch after this list)
  • Demonstrated ability to address complex problems by proposing solutions based on advanced knowledge of data lake architectures and technical considerations
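
As one concrete angle on query optimization, BigQuery can dry-run a query to report estimated bytes scanned before anything executes. A minimal sketch, again with hypothetical table and column names, where filtering on a date partition column lets BigQuery prune partitions instead of scanning the full table:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Hypothetical date-partitioned table; the created_date filter
    # prunes partitions and cuts bytes scanned.
    SQL = """
    SELECT policy_id, premium
    FROM `my-project.insurance.policies`
    WHERE created_date >= '2024-01-01'
    """

    # dry_run estimates cost without executing or caching the query.
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(SQL, job_config=job_config)
    print(f"Estimated bytes scanned: {job.total_bytes_processed:,}")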

Responsibilities

  • Design, build, and maintain scalable ETL pipelines using cloud-native services (GCP, BigQuery, Pub/Sub, etc.), as shown in the sketch after this list
  • Automate deployment and configuration using Terraform, Helm, and Kubernetes
  • Develop dashboards and other tooling around Looker performance metrics
  • Ensure Looker datasets are backed by performant and reliable queries
  • Diagnose and address performance issues in dashboards, LookML codebases, Explores, and derived tables
  • Build and maintain CI/CD pipelines for microservices and data workflows
  • Support and maintain custom tools for data processing and orchestration
  • Monitor and troubleshoot data lake operations and cloud resources
  • Design efficient and logical data models that align with business reporting needs
  • Write complex SQL queries for data extraction, transformation, and reporting
  • Collaborate with data engineers and analysts to deliver reliable, high quality data products
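
To make the pipeline work concrete, here is a minimal Apache Beam sketch of a streaming Pub/Sub-to-BigQuery pipeline of the kind Dataflow would run. The subscription, table, and schema below are hypothetical assumptions for illustration only:

    # pip install 'apache-beam[gcp]'
    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Hypothetical resource names; substitute real project IDs.
    SUBSCRIPTION = "projects/my-project/subscriptions/policy-events"
    TABLE = "my-project:insurance.policy_events"

    # Pub/Sub sources require a streaming pipeline.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
            | "ParseJson" >> beam.Map(json.loads)  # Each message body is a JSON event.
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                TABLE,
                schema="policy_id:STRING,premium:FLOAT,created_at:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )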

Benefits

  • Medical, Dental, and Vision Coverage
  • Holiday and Vacation Time
  • Health & Wellness Days
  • A Bonus Day for Your Birthday