About the position
Fetch is seeking a data engineer to join its fast-growing team. The role involves using the latest technology to build a performant, reliable, and scalable platform for delivering data. The ideal candidate has Python programming skills, solid SQL skills, and experience with a variety of data stores and streaming data. They should also have an interest in experimenting with different tools and in collaborating with other teams across the organization. A bachelor's degree in Computer Science or equivalent is required.
Responsibilities
- Build a performant, reliable, and scalable platform for delivering data using the latest technology
- Enable stakeholders to access and use vast amounts of data from various sources
- Make data processing seamless and effortless for both producers and consumers of data
- Ensure world-class data availability with terabytes of daily data
Requirements
- Python programming skills
- Solid SQL skills
- Familiarity with Unix systems, shell scripting, and Git
- Experience with relational (SQL), non-relational (NoSQL), and/or object data stores (e.g., Snowflake, MongoDB, S3, HDFS, Postgres, Redis, DynamoDB)
- Experience working with streaming data in Kafka and Flink
- Interest in building and experimenting with different tools and tech, and sharing your learnings with the broader organization
- The desire to work with other teams in the organization (e.g., Development, Business Intelligence, Data Science) to build tools and solutions that support and help manage data within the Fetch ecosystem
- Bachelor’s degree in Computer Science (or equivalent)
- Excellent written and verbal communication skills (bonus)
- Familiarity with open source software and dependency management (bonus)
- ETL, data pipeline, and/or microservice development experience (bonus)
- Cloud engineering and DevOps skills (e.g., AWS, CloudFormation, Docker) (bonus)
- Familiarity with messaging and asynchronous technologies (e.g., SQS, Kinesis, RabbitMQ, Kafka) (bonus)
Benefits
- Stock Options for everyone
- 401(k) Match: dollar-for-dollar match up to 4%
- Comprehensive medical, dental, and vision plans for everyone (pets included)
- $10,000 per year in education reimbursement
- Employee Resource Groups fostering a diverse and inclusive workplace
- Paid Time Off and 9 paid holidays, including Juneteenth and Indigenous Peoples’ Day
- Robust Leave Policies: 18 weeks of paid parental leave for primary caregivers, 12 weeks for secondary caregivers, and a flexible return to work schedule
- Hybrid Work Environment: Collaborate in stunning offices or work from home with provided hardware and software