About The Position

The Comcast SPIDER team is seeking an Engineer II with a strong foundation in software development to help build AI-driven solutions and scalable data pipelines. You’ll collaborate with cybersecurity researchers, data engineers, software developers, and platform teams to turn business requirements into proofs of concept for cybersecurity tools and products. This role is perfect for early-career engineers who have completed internships or academic projects and are ready to contribute to real-world solutions.

Requirements

  • Bachelor’s degree in Computer Science, AI, Engineering, or related field; or equivalent practical experience.
  • Proficiency in Python and SQL.
  • Experience with data wrangling, ETL/ELT concepts, and working with relational and/or NoSQL databases.
  • AI/ML Basics: Exposure to machine learning workflows (training, validation, inference) via coursework, projects, or internships; familiarity with libraries such as NumPy, Pandas, SciPy, scikit-learn, PyTorch, or TensorFlow.
  • Familiarity with LLM frameworks (e.g., LangChain, LangGraph).
  • Internship/co-op or open-source contributions in data or AI.
  • Strong problem-solving, communication, and teamwork skills.

Nice To Haves

  • MS or PhD in CS, AI, Math, or related fields.
  • 2–3 years of experience building ML models and pipelines.
  • Experience with Databricks or cloud data services (e.g., Azure Data Factory/Synapse, AWS Glue/Redshift).
  • Experience with LLM workflows (prompt engineering, RAG, and vector databases).
  • Familiarity with CI/CD (GitHub Actions) and Git.
  • Familiarity with building agentic workflows.
  • Awareness of cybersecurity principles.
  • Knowledge of LLM prompt-security best practices (e.g., defending against prompt injection).

Responsibilities

  • Design & Build Data Pipelines: Develop batch and streaming pipelines for data ingestion, transformation, and quality validation using tools like Python, SQL, and/or cloud-native services.
  • AI Application Development: Implement inference services, prompts/workflows, and retrieval pipelines (RAG) leveraging vector databases and embeddings.
  • Data Engineering Fundamentals: Implement data models (dimensional/star schemas), optimize queries, and contribute to data lake/warehouse design.
  • Performance Assessments: Perform functional testing for accuracy and implement iterative improvements.
  • Documentation & Collaboration: Author technical specs, runbooks, and knowledge base articles; collaborate closely with cross-functional stakeholders.
  • Security & Compliance: Follow secure coding practices and data governance policies. Implement security guardrails to enforce safe, read-only operations and prevent injection attacks.
  • Continuous Learning: Stay current on AI/ML and data engineering best practices; contribute to internal tooling, templates, and reusable components.
  • Other duties and responsibilities as assigned.

Benefits

  • We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools that are personalized to meet the needs of your reality—to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the benefits summary on our careers site for more details.


What This Job Offers

  • Job Type: Full-time
  • Career Level: Entry Level
  • Number of Employees: 5,001–10,000
