Technical Intern-2

Centific
$40

About The Position

Are you pushing the frontier of computer vision, multimodal large models, and embodied/physical AI, and have the publications to show it? Join us to translate cutting‑edge research into production systems that perceive, reason, and act in the real world.

The Mission

We are building state‑of‑the‑art Vision AI across 2D/3D perception, egocentric/360° understanding, and multimodal reasoning. As a Ph.D. Research Intern, you will own high‑leverage experiments that take ideas from paper → prototype → deployable module in our platform.

Requirements

  • Ph.D. student in CS/EE/Robotics (or related), actively publishing in CV/ML/Robotics (e.g., CVPR/ICCV/ECCV, NeurIPS/ICML/ICLR, CoRL/RSS).
  • Strong PyTorch (or JAX) and Python; comfort with CUDA profiling and mixed‑precision training.
  • Demonstrated research in computer vision and at least one of: VLMs (e.g., LLaVA‑style, video‑language models), embodied/physical AI, 3D perception.
  • Proven ability to move from paper → code → ablation → result with rigorous experiment tracking.
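The "rigorous experiment tracking" expectation above can be as simple as deterministic run IDs plus append-only logs. A minimal stdlib sketch (the `run_id`/`log_result` names, the config fields, and the JSONL file are illustrative, not part of this posting):

```python
import hashlib
import json
import os
import tempfile

def run_id(config: dict) -> str:
    """Deterministic ID derived from the experiment config, so identical
    configs always map to the same run (helps catch accidental duplicates)."""
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def log_result(path: str, config: dict, metrics: dict) -> None:
    """Append one JSON line per run: config hash plus result metrics."""
    with open(path, "a") as f:
        f.write(json.dumps({"run": run_id(config), **metrics}) + "\n")

# Usage: log one (hypothetical) ablation result.
cfg = {"lr": 3e-4, "backbone": "vit_b16", "seed": 0}
path = os.path.join(tempfile.gettempdir(), "runs.jsonl")
log_result(path, cfg, {"mAP": 0.41})
```

Because the ID is a hash of the sorted config, reordering keys or re-running the same setup reproduces the same run ID, which is the property that makes ablation bookkeeping trustworthy.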

Nice To Haves

  • Experience with video models (e.g., TimeSFormer/MViT/VideoMAE), diffusion or 3D GS/NeRF pipelines, or SLAM/scene reconstruction.
  • Prior work on multimodal grounding (referring expressions, spatial language, affordances) or temporal reasoning.
  • Familiarity with ROS2, DeepStream/TAO, or edge inference optimizations (TensorRT, ONNX).
  • Scalable training: Ray, distributed data loaders, sharded checkpoints.
  • Strong software craft: testing, linting, profiling, containers, and reproducibility.
  • Public code artifacts (GitHub) and first‑author publications or strong open‑source impact.

Responsibilities

  • Advance Visual Perception: Build and fine‑tune models for detection, tracking, segmentation (2D/3D), pose & activity recognition, and scene understanding (incl. 360° and multi‑view).
  • Multimodal Reasoning with VLMs: Train/evaluate vision–language models (VLMs) for grounding, dense captioning, temporal QA, and tool‑use; design retrieval‑augmented and agentic loops for perception‑action tasks.
  • Physical AI & Embodiment: Prototype perception‑in‑the‑loop policies that close the gap from pixels to actions (simulation + real data). Integrate with planners and task graphs for manipulation, navigation, or safety workflows.
  • Data & Evaluation at Scale: Curate datasets, author high‑signal evaluation protocols/KPIs, and run ablations rigorous enough to make irreproducible results impossible.
  • Systems & Deployment: Package research into reliable services on a modern stack (Kubernetes, Docker, Ray, FastAPI), with profiling, telemetry, and CI for reproducible science.
  • Agentic Workflows: Orchestrate multi‑agent pipelines (e.g., LangGraph‑style graphs) that combine perception, reasoning, simulation, and code‑generation to self‑check and self‑correct.
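As a concrete flavor of the evaluation-protocol work above, detection metrics ultimately reduce to primitives like intersection-over-union. A minimal sketch (box format and function name are illustrative, not a prescribed interface):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in
    (x1, y1, x2, y2) corner format."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero so disjoint boxes yield no overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Usage: two 2x2 boxes overlapping in a 1x1 square share 1/7 of their union.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ≈ 0.142857
```

Thresholding this value (e.g., IoU ≥ 0.5) is the standard matching step underneath mAP-style detection KPIs.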


What This Job Offers

Career Level

Intern

Education Level

Ph.D. or professional degree

Number of Employees

101-250 employees
