Machine Learning Engineer (Production and Deployment)
Songfinch
·
Posted:
August 29, 2023
·
Hybrid
About the position
As a Machine Learning Engineer (Production and Deployment) at Songfinch, you will play a crucial role in architecting, implementing, and maintaining the machine learning production and deployment lifecycle. Your main responsibilities will include transitioning ML research prototypes into at-scale products, implementing efficient deployment strategies for ML models, and keeping deployed applications running smoothly. You will also integrate monitoring and troubleshooting tools, continuously optimize the ML prototype-to-product pipeline, and stay up to date with advancements in ML and AI. This is an exciting opportunity to directly influence the future of the company and the music industry as a whole.
Responsibilities
- Implement efficient deployment strategies for ML models
- Rapidly transition and manage the continuous pipeline of ML research prototypes into at-scale products
- Productionize, package, and deploy ML applications
- Ensure smooth, logical, transparent functionality
- Integrate tools for monitoring, logging, alerting, troubleshooting, debugging deployed ML applications
- Continuously optimize our ML prototype-to-product pipeline
- Proactively identify potential bottlenecks and issues, and work with the broader team(s) to resolve them
- Partner with internal teams to troubleshoot and debug ML applications when needed
- Stay in tune with developments and advancements in ML and AI
- Identify relevant trends, tools, tech, and processes that could be incorporated as potential differentiators (within production, deployment, and beyond)
Requirements
- B.S. in Computer Engineering, Computer Science, Electrical Engineering, Physics, Applied Math, or other relevant STEM degrees
- 3+ years of hands-on experience with relevant tech/tools:
  - Basics (Conda, Git, GitHub, Python)
  - DevOps tools:
    - Containers (Docker, etc.)
    - Container orchestration (Kubernetes, Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, etc.)
    - Workflow orchestration (Apache Airflow, Flyte, AWS Step Functions, AWS Lambda, etc.)
    - Other AWS cloud computing (Elastic Compute Cloud, S3, SageMaker, etc.)
  - MLOps tools (MLflow, AWS SageMaker, Kubeflow, etc.)
  - ML platforms/frameworks/libraries (Keras, Matplotlib, NumPy, Pandas, PyTorch/Lightning, scikit-learn, TensorFlow, etc.)
- Adaptive, collaborative, and a problem-solving mentality
- Inherent builder of things, with an insatiable curiosity / desire to continuously learn
- A strong preference for (and history of) hands-on and action-oriented execution
- Strong communication skills
- Self-motivated and ignited by fast-paced environments