About The Position

As an AI & Data Engineer at Kyndryl, you'll play a key role in designing and building next-generation, data- and AI-powered solutions. You will develop proof-of-concept models and prototypes, architect cloud-native data platforms, and transform complex business and technical requirements into scalable, production-grade systems. Your work will span the full lifecycle of data and AI solutions, from ingestion and transformation to model training, deployment, and monitoring.

You will work hands-on with modern data engineering and AI technologies, ensuring the reliability, scalability, and performance of vendor platforms, cloud services, and custom-built solutions. You will act as a technical reference point for users working with highly sophisticated data and AI systems, including real-time data pipelines, large-scale distributed processing frameworks, and machine learning infrastructure. Your responsibilities will include designing, building, and managing modern data platforms using technologies such as Python, SQL, Spark, Kafka, and cloud-native services. You will work with relational and non-relational databases, including Oracle, MSSQL, MySQL, and PostgreSQL, as well as data warehouse and lakehouse platforms. You will continuously monitor and optimize performance, reliability, and security, implementing best practices in data governance, data quality, and model lifecycle management.

In addition to these hands-on technical responsibilities, you will collaborate closely with cross-functional teams, playing a leading role in AI and data platform initiatives. You will stay up to date with the latest advancements in AI, data engineering, and cloud technologies, proactively evaluating and introducing new tools, architectures, and methodologies that keep the organization at the forefront of innovation.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field
  • 8+ years of experience in data engineering, AI engineering, or platform engineering roles
  • Strong proficiency in Python and SQL, and experience with data processing frameworks such as Apache Spark, Flink, or Beam
  • Experience designing and operating data pipelines, ETL/ELT processes, and real-time streaming architectures (e.g., Kafka, cloud-native messaging services)
  • Hands-on experience with cloud platforms (AWS, Azure, or Google Cloud) and managed data services
  • Solid understanding of data architecture, distributed systems, and modern data engineering best practices
  • Experience with model deployment patterns, APIs, and basic MLOps practices

Nice To Haves

  • Experience with cloud-native data platforms and lakehouse/warehouse technologies (e.g., Snowflake, BigQuery, Databricks, Redshift)
  • Familiarity with MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD for ML, model monitoring)
  • Knowledge of containerization and orchestration technologies (Docker, Kubernetes)
  • Strong analytical and problem-solving skills with high attention to detail
  • Excellent communication and cross-functional collaboration skills
  • Ability to work independently and thrive in fast-paced, evolving environments

Responsibilities

  • Design and build next-generation, data- and AI-powered solutions
  • Develop proof-of-concept models and prototypes
  • Architect cloud-native data platforms
  • Transform complex business and technical requirements into scalable, production-grade systems
  • Work with modern data engineering and AI technologies, ensuring the reliability, scalability, and performance of vendor platforms, cloud services, and custom-built solutions
  • Act as a technical reference point for users working with highly sophisticated data and AI systems, including real-time data pipelines, large-scale distributed processing frameworks, and machine learning infrastructures
  • Design, build, and manage modern data platforms using technologies such as Python, SQL, Spark, Kafka, and cloud-native services
  • Work with relational and non-relational databases, including Oracle, MSSQL, MySQL, and PostgreSQL, as well as data warehouse and lakehouse platforms
  • Continuously monitor and optimize performance, reliability, and security, implementing best practices in data governance, data quality, and model lifecycle management
  • Collaborate closely with cross-functional teams, playing a leading role in AI and data platform initiatives
  • Stay up to date with the latest advancements in AI, data engineering, and cloud technologies, proactively evaluating and introducing new tools, architectures, and methodologies that keep the organization at the forefront of innovation

Benefits

  • Every position at Kyndryl offers a way forward to grow your career.
  • You’ll have access to data, hands-on learning experiences, and the chance to certify in all four major platforms.
  • Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey.
  • Our employee learning programs give you access to the best learning in the industry, with the opportunity to earn certifications from providers including Microsoft, Google, Amazon, Skillsoft, and many more.
  • Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations.