Research Crawling Engineer

MLabs
New York, NY
Remote

About The Position

We are hiring on behalf of our client, a technical infrastructure firm that delivers massive-scale web data to organizations building advanced artificial intelligence models. The client supports high-capacity bandwidth-sharing networks and operates a distributed crawler that accesses high-quality public web data at global scale. The team has also engineered pipelines for ingesting, segmenting, and annotating billions of multimedia files, enabling dataset creation for frontier research labs. The organization runs as a lean, technical team that prioritizes speed and direct execution.

As a Research Crawling Engineer, you will design and operate large-scale web data acquisition systems. The role spans distributed systems, scraping infrastructure, and data pipelines, with a focus on delivering high-quality inputs for research and model development.

Requirements

  • Extensive programming experience in one or more of the following: Go, Rust, Python, Java, or C++.
  • Proven experience in building web crawlers or large-scale data pipelines.
  • Solid understanding of HTTP, networking protocols, and browser behavior.
  • Familiarity with distributed systems and parallel processing techniques.
  • Experience handling large datasets, ideally at the terabyte to petabyte scale.
  • Demonstrated ability to debug and maintain systems within unstable or adversarial environments.

Nice To Haves

  • Experience with NLP pipelines or dataset curation for machine learning.
  • Familiarity with LLM pre-training data or retrieval systems.
  • Practical experience with headless browsers (e.g., Playwright, Puppeteer, or Chrome DevTools Protocol).
  • Knowledge of proxy systems, IP rotation, and large-scale request orchestration.
  • Background in data quality evaluation or benchmarking.
  • Experience running workloads on cloud or bare-metal infrastructure.

Responsibilities

  • Construct and maintain large-scale web crawlers across diverse domains.
  • Design high-throughput, fault-tolerant systems for data collection, managing volumes ranging from millions to billions of URLs per day.
  • Navigate anti-bot systems, rate limits, and dynamic, JavaScript-heavy websites.
  • Develop robust pipelines for data cleaning, deduplication, filtering, and normalization.
  • Build and maintain datasets specifically structured for research and machine learning model training.
  • Monitor and optimize crawl performance, coverage, and data quality through rapid iteration.
  • Collaborate with research teams to ensure data collection efforts align with modeling requirements.
  • Optimize infrastructure to ensure cost-efficiency, low latency, and reliability.

Benefits

  • Contribute to the development of a web-scale crawler and knowledge graph at the forefront of AI data accessibility.
  • Join a lean, low-ego team that prioritizes high output and professional growth.
  • Work on a fully remote team with flexibility and autonomy.
  • Earn a competitive salary, comprehensive benefits, and equity, commensurate with experience and the ability to operate at scale.