Data Science Intern

Cooper Standard · Livonia, MI
Hybrid

About The Position

The Role

We are seeking a Data Science Intern to help develop and deliver the core products of our AI stack: trained models of process physics and AI controllers for edge deployment. Unlike a traditional theoretical research role, this position focuses heavily on implementation. You will define, automate, and optimize the entire analytics path, including signal processing, training, validation, deployment, and monitoring. You will work within a dynamic team to deploy models for inference in mission-critical manufacturing environments, optimizing for real-time response.

Requirements

  • Core Languages: Proficiency in Python is required.
  • Libraries & Frameworks: Expertise with data analysis and ML libraries, specifically TensorFlow, PyTorch, NumPy, pandas, scikit-learn, and Spark ML.
  • DevOps & Environment: Fluency with standard DevOps tools. Experience working with ML libraries on Linux, including building and compiling them.
  • Cloud Technologies: Familiarity with cloud technologies such as AWS, Azure, Databricks, or Hadoop/Spark.
  • Data Visualization: Expertise in data visualization techniques to present actionable insights.
  • Time Series & Deep Learning: Demonstrated skills in time series data analysis and Deep Learning architectures including LSTM, RNN, and CNN.
  • Reinforcement Learning: Understanding of reinforcement learning techniques for developing optimal control policies.
  • Education: Currently pursuing or recently completed a Master’s degree or PhD in Computer Science, Engineering, or a related scientific field.
  • Problem Solving: Intellectual curiosity, entrepreneurial drive, and innovative thinking.
  • Communication: Ability to explain moderately complex information in a concise manner to both specialists and non-technical audiences.

Responsibilities

  • Pipeline & Code Development: Design, develop, and iterate on algorithms using robust coding practices and development tools such as GitHub.
  • Model Deployment: Deploy models in local or cloud environments. You will help deploy policies to the edge, where they run real-time inference and continuously adjust physical hardware.
  • Data Engineering: Assess new data sources and interact with data ingestion pipelines or data warehouses.
  • Automation: Automate the creation of controls and the analytics path to ensure the technology is scalable.
  • Collaboration: Collaborate with process domain experts to determine which data to collect and to ensure high data quality.

Benefits

  • Competitive compensation.
  • The opportunity to join a startup with funding, a major customer, and a world-class tech center.
  • Access to full-scale production lines to generate proprietary data and test your code on physical hardware.