Software Engineer (AI)

OMERS
Toronto, ON
Onsite

About The Position

Join a global workplace that empowers your impact, embraces diversity, and allows you to personalize your employee journey. OMERS is a purpose-driven, dynamic, and sustainable pension plan, and an industry-leading global investor with teams across North America and Europe. We serve 665,000 members, placing their best interests at the heart of everything we do. This role offers an opportunity to accelerate growth, prioritize wellness, build connections, and support communities.

We are seeking a highly motivated early-career Software Engineer to join the Pension Products & Technology team in Toronto. This position sits at the intersection of modern software engineering and emerging AI/ML capabilities, focusing on building next-generation digital solutions for OMERS Pensions. You will be part of a high-impact team leveraging cloud, distributed systems, and Generative AI to deliver intelligent, scalable, and secure solutions for members and internal users. This is an opportunity to learn, grow, and contribute to real-world AI-powered systems alongside experienced architects and engineers.

Requirements

  • 1–2 years of hands-on experience in software development
  • Strong programming skills in at least one language: Python or Java
  • Foundational understanding of Neural Networks and Transformers (attention, embeddings, tokenization) and how they impact LLM behavior
  • Practical knowledge of LLMs/Generative AI, including prompt engineering, hallucinations, context limits, and output variability
  • Understanding of RAG, embeddings, and vector search, and when to use them vs prompt-only approaches
  • Awareness of fine-tuning vs prompting vs grounding trade-offs
  • Ability to analyze and debug LLM outputs and define basic evaluation criteria (accuracy, completeness, format adherence)
  • Experience with AI/ML libraries or platforms (LangChain, TensorFlow, PyTorch, or similar)
  • Exposure to AI APIs such as Azure OpenAI or similar platforms
  • Basic understanding of backend development frameworks (e.g., Spring Boot, FastAPI)
  • Familiarity with APIs and web services (REST preferred)
  • Understanding of SQL and/or NoSQL databases (Postgres, MongoDB, etc.)
  • Basic knowledge of cloud platforms (Azure or GCP preferred)
  • Understanding of software development lifecycle and Agile methodologies
  • Exposure to microservices and distributed systems concepts
  • Familiarity with containerization (Docker) and basic Kubernetes concepts is a plus
  • Strong problem-solving and analytical skills
  • Ability to learn quickly and adapt to new technologies
  • Basic understanding of architecture concepts, including microservices, event-driven design, distributed systems fundamentals, and API design principles
  • Awareness of system scalability, reliability, and performance considerations
  • Understanding of security and data privacy best practices
  • Strong communication and collaboration skills
  • Ability to work in a team-oriented environment
  • Eagerness to learn and grow in both software engineering and AI domains
  • Attention to detail and commitment to quality
  • Ability to take initiative and contribute ideas
  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience demonstrated through projects, internships, or work
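To give a flavor of the "define basic evaluation criteria" requirement above, here is a minimal sketch of checking an LLM response for format adherence and completeness. All task names, keys, and sample outputs are hypothetical illustrations, not OMERS code:

```python
import json

def evaluate_output(output: str, golden: dict) -> dict:
    """Score a single LLM response against two basic criteria:
    format adherence (valid JSON) and completeness (all required
    keys present)."""
    result = {"format_ok": False, "complete": False}
    try:
        parsed = json.loads(output)
    except json.JSONDecodeError:
        # Model returned prose or malformed JSON; both checks fail.
        return result
    result["format_ok"] = True
    result["complete"] = all(k in parsed for k in golden["required_keys"])
    return result

# Hypothetical structured-summary task with two required keys.
golden = {"required_keys": ["summary", "sentiment"]}
good = '{"summary": "Pension statement issued.", "sentiment": "neutral"}'
bad = "The statement was issued."  # prose instead of JSON

print(evaluate_output(good, golden))
print(evaluate_output(bad, golden))
```

In practice, checks like these would run against a golden dataset and be re-run as regression tests across model or prompt versions, as the responsibilities below describe.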

Nice To Haves

  • Internship or project experience in AI/ML or cloud-based applications
  • Exposure to CI/CD pipelines and DevOps practices
  • Knowledge of data processing and analytics workflows

Responsibilities

  • Design, develop, test, and deploy scalable backend services and applications
  • Contribute to end-to-end feature development including design, implementation, and deployment
  • Assist in building and maintaining architectural artifacts such as data flows, APIs, and deployment models
  • Collaborate with senior engineers and architects to implement scalable and secure system designs
  • Participate in code reviews and contribute to improving code quality, maintainability, and performance
  • Work with cloud platforms (Azure or GCP) to deploy and manage applications
  • Develop and integrate APIs (REST, gRPC)
  • Support microservices-based and event-driven architectures
  • Contribute to proofs of concept (POCs) for new technologies, especially in the AI/ML and GenAI space
  • Work closely with DevOps teams to support CI/CD pipelines and deployments
  • Create and maintain technical documentation
  • Collaborate with product owners and business teams to understand requirements and translate them into technical solutions
  • Understand fundamentals of Machine Learning and Neural Networks
  • Connect neural network fundamentals (weights, training data distribution, loss optimization) to observable issues like hallucination, bias, and overconfidence in responses
  • Build a practical understanding of how LLMs generate outputs (token-by-token prediction, temperature, top-p) and tune them based on use case needs (deterministic vs creative tasks)
  • Work with RAG pipelines and evaluate retrieval quality (relevance of chunks, ranking effectiveness, context injection impact on answers)
  • Evaluate LLM outputs using practical techniques like A/B testing prompts, golden datasets, and regression testing across model versions
  • Define use-case specific evaluation criteria (e.g., factual accuracy, completeness, format adherence, reasoning correctness) instead of relying on generic benchmarks
  • Analyze failures by mapping them to root causes such as context window limits, poor retrieval grounding, token truncation, or attention dilution
  • Assist in building AI-powered features such as summarization, classification, and insights generation
  • Design prompts and system instructions informed by how transformers prioritize context (e.g., instruction placement, few-shot positioning)
  • Support fine-tuning or customization approaches for AI models (where applicable)
  • Build test cases that intentionally stress model weaknesses (long context, conflicting instructions, ambiguous queries)
  • Integrate AI services (Azure OpenAI, Gemini, or Claude) into applications
  • Ensure responsible AI usage, including data privacy and security considerations
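As an illustration of the RAG work described above (ranking chunks by relevance and selecting what to inject into the prompt context), here is a minimal retrieval sketch. The bag-of-words similarity is a stand-in for a real embedding model, and the chunks are invented examples, not OMERS content:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: lowercase token counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and return the top k
    for injection into the prompt context."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Members can defer their pension until age 71.",
    "The cafeteria menu changes weekly.",
    "Survivor benefits are paid to an eligible spouse.",
]
top = retrieve("When can a member defer a pension?", chunks)
print(top[0])
```

Evaluating retrieval quality in this setting means asking whether the top-ranked chunks are actually relevant to the query, whether the ranking order is sensible, and how the injected context changes the final answer.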

Benefits

  • Annual incentive award under our Short-term Incentive Plan and, if applicable, our Long-Term Incentive Plan
  • Group benefits
  • Retirement plans
  • Opportunities to accelerate your growth and development
  • A focus on wellness
  • Opportunities to build connections
  • Support for the communities where we live and work
  • An inclusive, barrier-free recruitment and selection process
  • A vast network of Employee Resource Groups with executive leader sponsorship
  • A Purpose@Work committee
  • Employee recognition programs
  • A best-in-class approach to complete wellness for our employees and members
  • Investment in our people, with opportunities to develop and grow