Iron EagleX • Posted about 1 month ago
Full-time • Mid Level
Onsite • Arlington, VA
51-100 employees

We are seeking a Data Engineering SME to design, build, and operate data pipelines that ingest, store, and process high-volume, multi-source data, primarily for modern AI/ML workloads. You will partner with software, analytics, and product teams to create model-ready datasets (features, embeddings, and prompts), implement scalable storage layers (data lakehouse and vector stores), and enable low-latency retrieval for query, inference, and retrieval-augmented generation (RAG). Responsibilities include orchestrating streaming and batch pipelines, optimizing compute for GPU/CPU workloads, enforcing data quality and governance, and instrumenting observability. This role is ideal for someone passionate about turning raw data into reliable, performant inputs for AI models and other analytics while right-sizing technologies and resources for scale and speed. This is an onsite position in Crystal City, VA.

Responsibilities:

  • Design, develop, and implement scalable data pipelines and ETL processes using Apache Airflow, with a focus on data for AI applications.
  • Develop messaging solutions utilizing Kafka to support real-time data streaming and event-driven architectures.
  • Build and maintain high-performance data retrieval solutions using Elasticsearch/OpenSearch.
  • Implement and optimize Python-based data processing solutions.
  • Integrate batch and streaming data processing techniques to enhance data availability and accessibility.
  • Ensure adherence to security and compliance requirements when working with classified data.
  • Work closely with cross-functional teams to define data strategies and develop technical solutions aligned with mission objectives.
  • Deploy and manage cloud-based infrastructure to support scalable and resilient data solutions.
  • Optimize data storage, retrieval, and processing efficiency.
Required Qualifications:

  • Experience with Apache Airflow for workflow orchestration.
  • Strong programming skills in Python.
  • Experience with Elasticsearch/OpenSearch for data indexing and search functionality.
  • Understanding of vector databases, embedding models, and vector search for AI applications.
  • Expertise in event-driven architecture and microservices development.
  • Hands-on experience with cloud services, including data storage (e.g., MinIO) and compute resources.
  • Strong understanding of data pipeline orchestration and workflow automation.
  • Working knowledge of Linux environments and database optimization techniques.
  • Strong understanding of version control with Git.
  • Due to US Government Contract Requirements, only US Citizens are eligible for this role.
  • An active TS/SCI security clearance is REQUIRED, and candidates must have or be willing to obtain a CI Poly. Candidates without this clearance will not be considered.
Preferred Skills:

  • Proficiency in Kafka for messaging and real-time data processing.
  • Understanding of LLM prompt engineering and associated ETL applications.
  • Knowledge of Apache Superset for data visualization and analytics.
  • Familiarity with Kubernetes for container orchestration.
  • Exposure to Apache Spark for large-scale data processing.