Data Science Intern

Cooper-Standard Automotive
Livonia, MI
20h · Hybrid

About The Position

We are seeking a Data Science Intern to help develop and deliver the core products of our AI stack: trained models of process physics and AI controllers for edge deployment. Unlike a traditional theoretical research role, this position focuses heavily on implementation. You will define, automate, and optimize the entire analytics path, including signal processing, training, validation, deployment, and monitoring, and you will work within a dynamic team to deploy models for inference in mission-critical manufacturing environments, optimizing for real-time response.

We prioritize candidates with strong software engineering fundamentals and fluency in modern data stacks. While programming is the primary focus, familiarity with the application areas listed under Requirements, especially time series analysis, deep learning, and reinforcement learning, is essential.
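
To make the analytics path concrete, the sketch below walks the same stages (signal processing, training, validation) over synthetic data. It is a minimal illustration in Python with NumPy; the function names and the toy signal are hypothetical, not part of Cooper-Standard's actual stack.

    # Illustrative only: a minimal analytics path on synthetic data.
    # All names and the toy signal are hypothetical stand-ins.
    import numpy as np

    def preprocess(signal: np.ndarray, window: int = 5) -> np.ndarray:
        """Signal processing step: simple moving-average smoothing."""
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="valid")

    def fit_model(x: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Training step: least-squares fit of a linear process model."""
        features = np.vstack([x, np.ones_like(x)]).T
        coeffs, *_ = np.linalg.lstsq(features, y, rcond=None)
        return coeffs

    def validate(coeffs: np.ndarray, x: np.ndarray, y: np.ndarray) -> float:
        """Validation step: mean absolute error on held-out data."""
        pred = coeffs[0] * x + coeffs[1]
        return float(np.mean(np.abs(pred - y)))

    # Synthetic process data standing in for real sensor streams.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 200)
    raw = 2.0 * t + 1.0 + rng.normal(0, 0.5, t.shape)
    clean = preprocess(raw)
    t_clean = t[: clean.shape[0]]
    model = fit_model(t_clean[:150], clean[:150])
    print("validation MAE:", validate(model, t_clean[150:], clean[150:]))

In production, the training and validation stages would be carried by the deep learning and reinforcement learning tooling listed under Requirements rather than a linear fit.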

Requirements

  • Core languages: proficiency in Python is required.
  • Libraries & frameworks: expertise with data analysis and ML libraries, specifically TensorFlow, PyTorch, NumPy, Pandas, scikit-learn, and Spark ML.
  • DevOps & environment: fluency with standard DevOps tools, and experience working with ML libraries on Linux, including building and compiling them.
  • Cloud technologies: familiarity with AWS, Azure, Databricks, or Hadoop/Spark.
  • Data visualization: expertise in data visualization techniques to present actionable insights.
  • Time series & deep learning: demonstrated skills in time series analysis and deep learning architectures, including LSTM, RNN, and CNN (a minimal sketch follows this list).
  • Reinforcement learning: understanding of reinforcement learning techniques for developing optimal control policies.
  • Education: currently pursuing or recently completed a Master’s degree or PhD in Computer Science, Engineering, or a science discipline.
  • Problem solving: intellectual curiosity, entrepreneurial drive, and innovative thinking.
  • Communication: ability to explain moderately complex information concisely to both specialists and non-technical audiences.
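
As referenced in the time series bullet above, the following is a minimal, self-contained sketch of the kind of model the role involves: a one-step-ahead LSTM forecaster in PyTorch. The architecture, toy sine-wave data, and hyperparameters are illustrative assumptions, not a prescribed solution.

    # Illustrative only: one-step-ahead LSTM forecasting on a toy signal.
    import torch
    import torch.nn as nn

    class Forecaster(nn.Module):
        def __init__(self, hidden: int = 32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                # x: (batch, seq_len, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])  # predict the next value

    model = Forecaster()
    x = torch.sin(torch.linspace(0, 12, 41)).unsqueeze(-1)     # toy signal
    inputs, target = x[:-1].unsqueeze(0), x[-1].unsqueeze(0)   # (1, 40, 1), (1, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), target)
        loss.backward()
        opt.step()
    print("final loss:", loss.item())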

Responsibilities

  • Pipeline & code development: design, develop, and iterate on algorithms using robust coding practices, including code development tools such as GitHub.
  • Model deployment: deploy models in local or cloud environments, and deploy policies to the edge, where they run real-time inference and continuously adjust physical hardware (see the control-loop sketch after this list).
  • Data engineering: assess new data sources and work with data ingestion pipelines and data warehouses.
  • Automation: automate both the creation of controls and the analytics path itself so the technology scales.
  • Collaboration: work with process domain experts to determine what data to collect, ensuring high data quality.
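
As referenced in the model deployment bullet, the sketch below shows the shape of a real-time control loop at the edge: read a measurement, run policy inference, apply an adjustment, and hold a fixed loop period. The sensor and actuator functions are hypothetical stand-ins, and the proportional rule is a placeholder for a trained policy.

    # Illustrative only: a schematic real-time control loop at the edge.
    # read_sensor / apply_adjustment are hypothetical stand-ins for hardware I/O.
    import time

    def read_sensor() -> float:
        """Hypothetical stand-in for a real sensor read."""
        return 42.0

    def apply_adjustment(delta: float) -> None:
        """Hypothetical stand-in for an actuator command."""
        print(f"adjusting setpoint by {delta:+.3f}")

    def policy(measurement: float, setpoint: float, gain: float = 0.1) -> float:
        """Trained-policy placeholder: here, a simple proportional rule."""
        return gain * (setpoint - measurement)

    SETPOINT, PERIOD_S = 45.0, 0.05      # 20 Hz loop; figures are arbitrary
    for _ in range(3):                   # bounded loop for the sketch
        start = time.monotonic()
        apply_adjustment(policy(read_sensor(), SETPOINT))
        time.sleep(max(0.0, PERIOD_S - (time.monotonic() - start)))

A deployed policy would replace the proportional rule with a trained model's inference call, but the timing structure stays the same.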

Benefits

  • Competitive compensation.
  • The opportunity to join a startup with funding, a major customer, and a world-class tech center.
  • Access to full-scale production lines to generate proprietary data and test your code on physical hardware.