We are hiring on behalf of our client, a technical infrastructure firm that delivers massive-scale web data to organizations building advanced artificial intelligence models. The client operates high-capacity bandwidth-sharing networks and a distributed crawler that accesses high-quality public web data at global scale. Its team has also engineered pipelines for ingesting, segmenting, and annotating billions of multimedia files, enabling dataset creation for frontier research labs. The organization runs as a lean, technical team that prioritizes speed and direct execution.

As a Research Crawling Engineer, you will design and operate large-scale web data acquisition systems. The role spans distributed systems, scraping infrastructure, and data pipelines, with a focus on providing high-quality inputs for research and model development.
Job Type: Full-time
Career Level: Senior
Education Level: None listed