Senior Data Engineer

Velocity Staff
Overland Park, KS

About The Position

Velocity Staff, Inc. is working with our client in the Overland Park, KS area to identify a senior-level Data Engineer to join their Data Services Team. The right candidate will draw on expertise in data warehousing, data pipeline creation and support, and analytical reporting. They will be responsible for gathering and analyzing data from several internal and external sources, designing a cloud-focused data platform for analytics and business intelligence, and reliably providing data to our analysts. This role requires a significant understanding of data mining and analytical techniques. The ideal candidate will have strong technical capabilities, business acumen, and the ability to work effectively with cross-functional teams.

Requirements

  • Bachelor’s degree in computer science, data science, or a related technical field, or equivalent practical experience
  • Proven experience with relational and NoSQL databases (e.g., Postgres, Redshift, MongoDB, Elasticsearch)
  • Experience building and maintaining AWS-based data pipelines; technologies currently utilized include AWS Lambda, Docker/ECS, and MSK
  • Mid/senior-level development utilizing Python (Pandas/NumPy, Boto3, SimpleSalesforce)
  • Experience with version control (git) and peer code reviews
  • Enthusiasm for working directly with customer teams (business units and internal IT)

Nice To Haves

  • Experience with data processing and analytics using AWS Glue or Apache Spark
  • Hands-on experience building data-lake-style infrastructures using streaming data technologies (particularly Apache Kafka)
  • Experience with data processing using Parquet and Avro
  • Experience developing, maintaining, and deploying Python packages
  • Experience with Kafka and the Kafka Connect ecosystem
  • Familiarity with data visualization techniques using tools such as Grafana, Power BI, Amazon QuickSight, and Excel

Responsibilities

  • Work with data architects to understand current data models and build pipelines for data ingestion and transformation.
  • Design, build, and maintain a framework for pipeline observation and monitoring, focusing on the reliability and performance of jobs.
  • Surface data integration errors to the proper teams, ensuring timely processing of new data.
  • Provide technical consultation for other team members on best practices for automation, monitoring, and deployments.
  • Provide technical consultation for the team on “infrastructure as code” best practices: building deployment processes utilizing technologies such as Terraform or AWS CloudFormation.