Workday • Posted 7 months ago
$106,400 - $159,600/Yr
Full-time • Mid Level
Hybrid • Atlanta, GA
Publishing Industries

At Workday, we are the Machine Learning Product team, focused on applying machine learning and statistical analysis to our products. We build data-driven products that help organizations uncover insights and make strategic decisions. As a Software Engineer III, you will develop ML-powered features for our HR & Talent product portfolio, working closely with ML engineers and other software teams. You will be responsible for designing and developing new APIs and microservices deployed at scale, applying modern MLOps, DevOps, and data engineering stacks.

  • Work with cross-functional teams to deliver scalable, secure, and reliable solutions.
  • Engage with data scientists, software engineers, ML engineers, PMs, and architects to elaborate requirements.
  • Develop software features from end to end including infrastructure as code.
  • Design and build developer tools and services that enable ML capabilities.
  • Participate in architecture reviews, code reviews and technology evaluation.
  • Research, evaluate, prototype and drive adoption of new ML tools and services.
  • Build and optimize data storage solutions to handle large volumes of structured and unstructured data.
  • Build systems and dashboards to monitor service and ML model health.
  • 5 or more years of professional software industry experience.
  • Proven experience in software development with proficiency in at least one programming language (e.g., Python, Go).
  • Bachelor's and/or master's degree in computer science or computer engineering.
  • Optimize public cloud-based infrastructure (AWS, GCP) to support machine learning workloads.
  • Implement and manage CI/CD workflows to automate testing, integration, and delivery of machine learning components.
  • Professional experience building web applications and microservices, including API design.
  • Experience running and maintaining Databricks, SageMaker, and Apache Spark as a data platform service.
  • Experience with big data technologies and frameworks (e.g., Spark, Flink, Hadoop, Kafka).
  • Hands-on experience with data warehousing concepts and ETL/ELT principles.
  • Implementation and operation of distributed systems.
  • Troubleshoot and resolve performance bottlenecks, system outages, and other operational issues.
  • Experience with Data Engineering and/or ML systems.
  • Experience with cloud data platforms and services such as Databricks, SageMaker, and EMR.
  • Strong problem-solving skills and ability to work in a fast-paced environment.
  • Experience with MLOps platforms such as Kubeflow, and pipeline orchestrators such as Airflow, Dagster, and Kubeflow Pipelines.
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Knowledge of data governance, data cataloging, and metadata management tools.
  • Experience with data security best practices.
  • Experience with online and batch feature stores such as Feast.
  • Workday Bonus Plan or role-specific commission/bonus.
  • Annual refresh stock grants.
  • Flexible work schedule with at least 50% in-office time each quarter.