About The Position

A Senior Data Engineer at Filevine is a hands-on individual contributor who designs, builds, and operates the data systems that power LOIS, our analytics products, and the agentic AI experiences our customers rely on. This role sits within the Data Engineering team and is focused on optimizing and extending Filevine's conversational self-service analytics solution — making natural-language access to legal operational data faster, smarter, and more reliable. You will partner closely with product, analytics, and AI engineering to turn raw legal and operational data into trusted, query-ready, agent-ready data products. We expect this role to spend up to 30% of its time on non-coding activities, including design, review, and cross-functional collaboration.

Requirements

  • 5+ years of professional data engineering or backend engineering experience, with a proven track record of delivering production-grade data systems that drive measurable business outcomes.
  • Significant hands-on experience operating a modern cloud data warehouse in production (e.g., Snowflake, BigQuery, Redshift, Databricks, Synapse, or equivalent) — including performance tuning, warehouse and cost management, role-based access control, and orchestration of warehouse-native compute (stored procedures, UDFs, streams/tasks, or equivalent).
  • Demonstrated experience building with Agentic AI or LLM-powered systems in production — e.g., RAG pipelines, tool-using agents, MCP servers, warehouse-native LLM functions (such as Snowflake Cortex, BigQuery ML, or Databricks AI), or comparable frameworks.
  • Expertise in advanced SQL and Python for building reliable, well-tested data pipelines and transformations.
  • Experience with modern data modeling and transformation tooling such as dbt, including testing, documentation, and backward-compatible model design that supports self-service analytics.
  • Experience with workflow orchestration (Airflow, Dagster, or similar) and cloud-native deployment on AWS, Azure, or GCP.
  • Strong fundamentals in data modeling (dimensional, star/snowflake schemas), distributed systems, performance tuning, and data quality / observability principles.
  • Professional experience with modern software development methodologies: Agile/Kanban, Git, CI/CD, and DevOps.
  • Excellent written and verbal communication skills, with the ability to explain complex technical and data concepts to both technical and non-technical stakeholders.
  • B.S., M.S., or Ph.D. in Computer Science, Information Systems, Engineering, or a related field — or equivalent professional experience.

Nice To Haves

  • Hands-on Snowflake experience, including Snowpipe, streams/tasks, data sharing, and cost/governance tuning at scale.
  • Experience with Snowflake Cortex Analyst specifically, including authoring and iterating on semantic models and verified queries.
  • .NET / C# experience, or familiarity with reading and integrating against a .NET-based application backend.
  • Experience using modern UI development tools, particularly Svelte or React.
  • Experience supporting machine learning workflows: feature stores, training datasets, or real-time scoring infrastructure.
  • Experience in SaaS or product-led growth environments, including product analytics and revenue/usage telemetry.
  • Infrastructure-as-code experience (Terraform), containerization (Docker, Kubernetes), and deployment (Octopus).
  • Familiarity with the legal tech domain, document-heavy data, or working with unstructured data at scale.
  • Track record of mentoring engineers and contributing to hiring and team-building.

Responsibilities

  • Optimize and improve Filevine's production usage of Snowflake and Cortex features — including warehouse management (usage, sizing, monitoring, etc.), clustering, query performance tuning, cost governance, and storage efficiency.
  • Own and evolve our agentic data modeling and natural-language data retrieval (text-to-SQL) capabilities: build and curate semantic models, refine prompts, expand verified question libraries, and measure answer quality so that natural-language analytics become more accurate over time.
  • Design and build batch and streaming data pipelines that ingest, transform, and model data from Filevine's product, CRM, billing, and telemetry systems into trusted, well-documented data products.
  • Build the data foundations that power agentic AI workflows and LOIS — including feature pipelines, retrieval datasets, and low-latency serving paths for LLM-based reasoning over customer data.
  • Establish reliability and governance standards including data quality checks, lineage, monitoring, incident response, access control, and PII handling consistent with our compliance posture.
  • Partner with product and engineering stakeholders to define event contracts, model business concepts (matters, firms, users, billing) consistently, and reduce ambiguity across downstream consumers.
  • Lead the evaluation and adoption of emerging tools across the modern data stack, recommending right-fit solutions that align with Filevine's strategic and security goals.
  • Provide technical mentorship within the Data Engineering team, contribute to code reviews and design documents (DDs/ADRs), and help raise the bar on data engineering practice at Filevine.
  • Participate in on-call rotations to maintain SLAs for production data pipelines and analytics surfaces.

Benefits

  • Medical, Dental, & Vision Insurance (for full-time employees)
  • Maternity & paternity leave (for full-time employees)
  • Short & long-term disability