About The Position

We are seeking a highly motivated and innovative Senior Applied Scientist to join our research team, focused on advancing agentic AI systems for decision automation, knowledge gathering, and organizational intelligence. In this role, you will work at the intersection of AI agents, large language models, knowledge graphs, and causal reasoning to design and prototype next-generation systems that move beyond search and static analytics toward adaptive, long-horizon decision-making agents. Your work will contribute to building knowledge engines: dynamic, evolving systems that unify structured and unstructured data, capture tacit organizational knowledge, and provide grounded context for autonomous and semi-autonomous agents operating at enterprise scale.

Requirements

  • Ph.D. in Computer Science, Computer Vision, Machine Learning, or a related discipline; or Master’s degree with 2+ years of experience leading applied research or product-focused CV/ML projects.
  • Expertise in modern computer vision architectures (e.g., ViT, SAM, CLIP, BLIP, DETR, or similar).
  • Experience with Vision-Language Models (VLMs) and multimodal AI systems.
  • Strong background in real-time video analysis, including event detection, motion analysis, and temporal reasoning.
  • Experience with transformer-based architectures, multimodal embeddings, and LLM-vision integrations.
  • Proficiency in Python and deep learning libraries such as PyTorch or TensorFlow, as well as OpenCV.
  • Strong problem-solving skills, with a track record of end-to-end ownership of applied ML/CV projects.
  • Excellent communication and collaboration skills, with the ability to work in cross-functional teams.

Nice To Haves

  • Experience with cloud platforms (AWS, Azure) and deployment frameworks (ONNX, TensorRT) is a plus.

Responsibilities

  • Design and build state-of-the-art computer vision systems with a focus on real-time video analytics, video summarization, object tracking, and activity recognition.
  • Develop and apply Vision-Language Models (VLMs) and multimodal transformer architectures for deep semantic understanding of visual content.
  • Build scalable pipelines for processing high-volume, high-resolution video data, integrating temporal modeling and context-aware inference.
  • Apply self-supervised, zero-shot, and few-shot learning techniques to enhance model generalization across varied video domains.
  • Explore and optimize LLM prompting strategies and cross-modal alignment methods for improved reasoning over vision data.
  • Collaborate with product and engineering teams to integrate vision models into production systems with real-time performance constraints.
  • Contribute to research publications, patents, and internal IP assets in the area of vision and multimodal AI.
  • Provide technical mentorship and leadership to junior researchers and engineers.

Benefits

  • Competitive Salary: Aligned with experience and market standards
  • Comprehensive Insurance: Health, dental, and vision coverage for you and your family
  • 401(k) Plan: Build your financial future with our retirement savings plan
  • Flexible PTO & Hybrid Work: Take time off when needed and enjoy remote flexibility per company guidelines
  • Growth & Development: Access professional learning opportunities and career advancement support
  • Onsite Perks: Enjoy catered lunches, snacks, and a fully stocked kitchen
  • Team Bonding: Company-sponsored happy hours and social events to connect and unwind


What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: Ph.D. or professional degree
  • Number of Employees: 101–250 employees
