TEKsystems · Posted 14 days ago
$119,800 - $179,800/Yr
Full-time • Mid Level
Remote • Minneapolis, MN
5,001-10,000 employees

We are seeking a highly skilled and motivated Senior AI/ML Engineer with 5 or more years of experience in data engineering and at least 3 years in AI/ML engineering. The ideal candidate will have hands-on expertise in designing, developing, and deploying secure, scalable, and high-performance ML pipelines, ensuring full compliance with industry-standard security and risk frameworks such as RMF, NIST, and CMMC. They should be proficient in Amazon Web Services (AWS) and/or Google Cloud Platform (GCP), with a solid foundation in data engineering, machine learning, cloud-native MLOps tools, and data governance. They should also be a team player, responsible for developing and orchestrating the AI/ML components of solutions delivered by the Data & AI Practice for our clients. This is a fully remote role within the U.S. and entails up to 50% travel to client sites as project needs require.

  • Actively participate in requirements-gathering workshops with customers, translating functional requirements into technical solutions and complex technical concepts into actionable insights for stakeholders.
  • Actively participate in architectural discussions, independently or under the guidance of the Practice Architect and/or Lead Engineer, to design and develop effective, efficient, reliable, secure, and scalable data engineering solutions aligned with the overall data management strategy.
  • Build end-to-end machine learning pipelines using AWS (e.g., SageMaker, Lambda, S3) or GCP (e.g., Vertex AI, Cloud Functions, BigQuery) for training, evaluation, and model lifecycle management, and ensure the scalability, reliability, and performance of ML models in production environments.
  • Build, train, and fine-tune models using frameworks such as TensorFlow, PyTorch, or Scikit-learn, applying techniques such as hyperparameter tuning, feature engineering, and model evaluation to continuously improve accuracy and efficiency.
  • Design and implement robust data ingestion, transformation, and storage solutions using cloud-native tools (e.g., AWS Glue, GCP Dataflow) while ensuring data quality, governance, and compliance with industry and/or organizational standards.
  • Develop and maintain CI/CD pipelines for ML workflows using tools such as AWS CodePipeline or GCP Cloud Build, automating model deployment, monitoring, and rollback strategies to support continuous delivery.
  • Implement IAM roles, VPC configurations, and encryption protocols to safeguard data and models following best practices for cost optimization and cloud security.
  • Collaborate with data scientists, DevSecOps engineers, and cybersecurity SMEs to ensure secure data processing and model deployment, and to operationalize deployed models.
  • Create prototypes and evaluate emerging tools and methodologies to drive innovation within the team.
  • Occasionally support sales and pre-sales partners in converting opportunities to revenue through thought leadership in the designated area of expertise (AI/ML).

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or related field
  • 5 or more years of hands-on experience in data engineering (preferably in a cloud environment) and 3 or more years of experience in machine learning engineering roles, preferably in secure or classified environments
  • Strong proficiency in Python, PySpark, SQL, Jupyter notebooks, and distributed computing; R, Java, or Scala a plus
  • Strong understanding of core machine learning, deep learning, and NLP
  • Deep understanding of cloud-native ML services like Amazon SageMaker, AWS Lambda, GCP Vertex AI, and BigQuery ML.
  • Proficiency in supervised, unsupervised, and deep learning techniques
  • Hands-on experience with TensorFlow, PyTorch, Scikit-learn, or similar libraries
  • Knowledge of CI/CD pipelines, model versioning, and automated deployment, with experience using tools such as Kubeflow, MLflow, Docker, and Kubernetes
  • Production-level experience ingesting structured, semi-structured, and unstructured data from APIs, RDBMS, and/or streaming sources into data lakes or storage platforms [e.g., Snowflake, S3, Google Cloud Storage (GCS)]
  • Ability to design robust evaluation metrics and monitor model performance post-deployment, including experience with drift detection, retraining strategies, and alerting mechanisms
  • Solid understanding of data privacy, IAM roles, encryption, and compliance standards (e.g., GDPR, HIPAA), with the ability to apply that knowledge to implement secure ML solutions in cloud environments
  • Strong analytical skills to translate business problems into ML solutions as well as troubleshoot complex issues across data, model, and infrastructure layers
  • Excellent verbal and written communication skills
  • Ability to work cross-functionally with product managers, data scientists, and engineering teams
  • Passion for staying current with AI/ML research and cloud technologies, with the ability to evaluate and adopt emerging tools and methodologies
  • Familiarity with DoD data strategy, RMF / NIST / CMMC / FedRAMP frameworks
  • Experience with Generative AI, LLMs, transformer architecture, and prompt engineering
  • Knowledge of Agentic AI frameworks
  • Industry-recognized associate- or advanced-level AI/ML certification from AWS, GCP, Snowflake, or Databricks, such as:
      ◦ AWS Machine Learning Engineer – Associate
      ◦ AWS Machine Learning – Specialty
      ◦ GCP Professional Machine Learning Engineer
      ◦ Databricks Certified Machine Learning Associate
      ◦ Databricks Certified Machine Learning Professional

  • Medical, dental & vision
  • 401(k)/Roth
  • Insurance (Basic/Supplemental Life & AD&D)
  • Short and long-term disability
  • Health & Dependent Care Spending Accounts (HSA & DCFSA)
  • Transportation benefits
  • Employee Assistance Program
  • Tuition Assistance
  • Time Off/Leave (PTO, Paid Family Leave)