Junior Data Platform Engineer

Trulioo
San Diego, CA (Hybrid)

About The Position

Are you ready to embark on a career that truly affects people around the world? Trulioo invites you to be a catalyst for change in the dynamic realm of digital identity verification. As the global front-runner in our industry, we are redefining how businesses grow, innovate, and comply online. Picture yourself at the forefront of innovation, contributing to our award-winning platform that enables organizations worldwide to quickly onboard customers, optimize costs, and combat fraud. Fueled by Silicon Valley backing, Trulioo stands as the trusted platform that can verify more than 5 billion people and 700 million business entities spanning 195 countries.

But Trulioo is more than a tech company. We are a united force of dedicated experts committed to establishing trust online. Headquartered in Vancouver, with strategic hubs in San Diego and Dublin, we foster a culture of collaboration and open communication. Our offices support a hybrid model, and staff typically work three days per week at a hub location. Join us where excitement meets innovation and contribute to a world where trust and technology unite.

Position Summary: We’re looking for a Junior Data Platform Engineer who is eager to grow their skills in building data pipelines, modeling complex information, and applying machine learning to improve data quality and search systems. This is an exciting opportunity to learn modern data engineering practices while contributing to the development of systems that support person and business verification services. You’ll collaborate with experienced engineers and data scientists, gaining hands-on experience across relational databases, NoSQL, and vector-based search technologies. If you enjoy tackling data challenges, exploring open-source tools, and are excited to grow into a well-rounded data engineer, we’d love to meet you. This is a full-time hybrid position based out of our Sorrento Valley office in San Diego, with three in-office days per week.

Requirements

  • 2 years of professional experience (or strong academic / open-source / project experience) in data engineering, ML engineering, or software development.
  • Solid programming fundamentals, ideally in Python.
  • Basic understanding of data modeling, ETL concepts, and working with databases (SQL or NoSQL).
  • Exposure to cloud environments (AWS, GCP, or Azure) through coursework, internships, or personal projects.
  • Eagerness to learn and apply best practices in data engineering and MLOps.
  • Passionate about learning and experimenting with new data technologies.
  • Strong analytical thinking and problem-solving skills.
  • Effective communicator - comfortable asking questions and collaborating with peers.
  • Self-motivated, curious, and eager to take ownership of projects.
  • Excited to grow within a team that values mentorship, collaboration, and continuous improvement.

Nice To Haves

  • Hands-on experience from academic or open-source projects related to data processing, machine learning, or information retrieval.
  • Familiarity with ETL / workflow tools (Airflow, Prefect, Dagster, etc.).
  • Some exposure to vector databases, embedding-based search, or semantic search pipelines.
  • Curiosity about graph databases, streaming systems, or data orchestration frameworks.
  • Active participation in open-source projects or data-related online communities.
  • Contributions to group projects, hackathons, or research that involve data modeling, ML integration, or large-scale data management.
  • Demonstrated curiosity about LLMs, semantic search, or retrieval-augmented generation (RAG) systems.

Responsibilities

  • Assist in building and maintaining data ingestion and transformation pipelines from internal and external data sources.
  • Learn to design data models for SQL, NoSQL, and modern data stores (e.g., vector or graph databases).
  • Support efforts to integrate and test ML models for tasks like entity resolution, semantic enrichment, and similarity search (a toy sketch of this kind of flow follows this list).
  • Work with senior engineers to optimize data workflows, monitor data quality, and improve pipeline performance.
  • Contribute to documentation, testing, and experimentation around new data tools and techniques.
  • Collaborate with data scientists and software engineers to deliver reliable, scalable data solutions.
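
To give candidates a concrete feel for the kind of work described above, here is a minimal, self-contained Python sketch of an ingest, normalize, embed, and search flow. It is illustrative only: the record shape, the trigram-hashing "embedding," and helper names such as normalize_record and top_matches are hypothetical stand-ins, not Trulioo's actual pipeline, models, or data.

    # Illustrative only: a toy ingest -> normalize -> embed -> similarity-search flow.
    # Names (BusinessRecord, normalize_record, embed, top_matches) and the hashing
    # "embedding" are hypothetical stand-ins, not a real production pipeline.
    from dataclasses import dataclass
    import hashlib
    import math

    @dataclass
    class BusinessRecord:
        record_id: str
        name: str
        country: str

    def normalize_record(raw: dict) -> BusinessRecord:
        """Basic cleanup of a raw source row (trim whitespace, unify casing)."""
        return BusinessRecord(
            record_id=str(raw["id"]).strip(),
            name=" ".join(raw["name"].split()).lower(),
            country=raw.get("country", "").strip().upper(),
        )

    def embed(text: str, dims: int = 64) -> list[float]:
        """Toy character-trigram hashing 'embedding' so the example runs offline."""
        vec = [0.0] * dims
        for i in range(len(text) - 2):
            bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dims
            vec[bucket] += 1.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def top_matches(query: str, records: list[BusinessRecord], k: int = 3):
        """Rank records by cosine similarity of their name embeddings to the query."""
        q = embed(query)
        scored = [(sum(a * b for a, b in zip(q, embed(r.name))), r) for r in records]
        return sorted(scored, key=lambda s: s[0], reverse=True)[:k]

    if __name__ == "__main__":
        raw_rows = [
            {"id": 1, "name": "Acme  Holdings Ltd", "country": "gb"},
            {"id": 2, "name": "ACME HOLDING LIMITED", "country": "GB"},
            {"id": 3, "name": "Globex Corporation", "country": "us"},
        ]
        records = [normalize_record(r) for r in raw_rows]
        for score, rec in top_matches("acme holdings limited", records):
            print(f"{score:.2f}  {rec.record_id}  {rec.name}")

In practice, the toy hashing function would be swapped for a real embedding model and the in-memory ranking for a vector database; growing into that kind of production work is exactly what this role supports.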

Benefits

  • Comprehensive Benefits: We provide a robust benefits package for full-time, permanent employees, including health, dental, and vision coverage, retirement plans with company match, paid time off, parental leave, and an annual education & training stipend (equivalent to $1,000 in local currency). Specific benefits may vary by location and will be discussed further during the interview process.
  • Flexible Hybrid Working Environment: Our offices are designed to support both collaboration and flexibility. Enjoy weekly lunches, quality coffee, and regular social events. Many locations also feature parent rooms, on-site gyms, comfortable lounges, and adaptable workstations to support your comfort and productivity.
  • Wellness: We care about your well-being. Team members have access to wellness workshops and events, as well as a complimentary Headspace subscription to help you stay focused, grounded, and energized.
  • Employee Resource Groups: Belonging is an important part of doing your best work. Our ERGs provide an inclusive space, support and community for employees of diverse backgrounds and allies. We host informative, fun sessions and celebrations that are often open to the entire organization.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Entry Level
  • Education Level: No Education Listed
  • Number of Employees: 251-500 employees
