Data & AI Platform Engineer

Brown and Caldwell

About The Position

The Digital Solutions team at Brown and Caldwell (BC) is seeking a Data & AI Platform Engineer to strengthen its cloud platform, deployment, and AI/data infrastructure capabilities. This role bridges traditional MLOps and data platform engineering with the demands of production-grade AI, including retrieval-augmented generation (RAG) pipelines, multi-agent orchestration, and LLMOps, to support the full lifecycle of analytics, modeling, and AI-enabled delivery workflows.

The ideal candidate will have depth in platform engineering, DevOps/MLOps, and cloud infrastructure, and the ability to operationalize AI systems with rigor. They will partner closely with data scientists, AI engineers, cloud architects, and domain SMEs to deliver scalable, maintainable, and auditable solutions across environmental and water-resources projects.

The Data & AI Platform Engineer will design, develop, and maintain data pipelines and architectures for effective data integration, storage, and analysis, including data modeling, ETL development, and performance tuning. The role also involves troubleshooting data-related issues, with a focus on data quality and accessibility, and architecting pipelines in compliance with BC and client data security and industry standards.

Requirements

  • Experience building and optimizing data pipelines, architectures, and data sets.
  • Strong working knowledge of SQL and skill in implementing and managing relational databases.
  • Proficient in creating and managing ETL processes and in data cleaning and validation techniques.
  • Proficient in Python and other scripting languages used in data engineering.
  • Writes clean, maintainable, and scalable code and applies software engineering best practices, including use of version control systems (e.g., Git).
  • Demonstrated experience with data warehousing solutions, data lakes, and cloud platforms.
  • Typically, a minimum of 5 years of data engineering or related experience.
  • Typically certified in BC's SMS Framework and progressing through the SMS competencies.
  • A degree in data engineering, computer science, information technology, or a related field, or equivalent experience, is required.

Nice To Haves

  • Hands‑on experience supporting production LLM‑ or RAG‑based systems in a platform, data, or MLOps capacity, including retrieval pipelines, vector search and embeddings, document chunking strategies, and orchestration patterns such as routing, tool use, and context management.
  • Experience with services like Azure AI Search and agentic or multi‑agent workflows is a plus.
  • Familiarity with LLMOps practices and operational tooling, including evaluation frameworks, prompt and configuration versioning, model or output drift detection, observability, and monitoring approaches (e.g., OpenTelemetry).
  • Exposure to analytics platforms and integration‑heavy systems, including APIs, workflow orchestration tools (e.g., Airflow), and modern cloud data platforms such as Databricks or Snowflake—particularly where AI‑assisted analytics or natural‑language interfaces are used to support data exploration and insight generation.
  • Experience deploying and operating AI‑enabled or analytics-heavy services in Docker‑based containerized runtimes on managed cloud platforms, using infrastructure as code with cloud providers (Azure preferred, with AWS and GCP acceptable); Kubernetes environments (AKS or equivalent) a plus where applicable.
  • Familiarity with geospatial data and analysis tools, such as ESRI ArcGIS, PostGIS, or geopandas.
  • Interest or experience in environmental, water resources, or scientific computing domains, with the ability to collaborate effectively across disciplines and contribute to mentoring or supporting junior team members and interns.

Responsibilities

  • Design, create, and maintain data pipelines to collect, clean, transform, and load data from various sources, including sensor data, historical records, and geospatial information, to facilitate data warehousing.
  • Collaborate with interdisciplinary teams of environmental engineers, data scientists, and software developers to understand data requirements and develop scalable data solutions.
  • Participate in the design, creation, and management of data warehouses, data lakes, and databases to ensure efficient data storage, retrieval, and management.
  • Develop, deploy, execute, and monitor ETL (Extract, Transform, Load) processes to support data analysis, visualization, and machine learning model training.
  • Develop and maintain data models, and manage and query SQL databases to handle stored data efficiently.
  • Design and execute testing plans for data pipeline and data warehousing implementation efforts.
  • Implement data quality improvement and data governance processes to enhance reliability and accessibility.
  • Collaborate with IT infrastructure and cybersecurity teams to implement and operate data pipelines within approved data infrastructure, performance, and security guidelines.
  • Design and execute processing tasks using Python, and maintain an up-to-date understanding of big data processing frameworks.
  • Perform regular data audits and updates to ensure a high level of data accuracy and integrity.
  • Adapt to and execute various additional assignments based on evolving needs.
  • May provide mentorship, guidance, support, and knowledge-sharing to help less experienced team members develop their skills and grow within their roles.

Benefits

  • Medical
  • Dental
  • Vision
  • Short- and long-term disability
  • Life insurance
  • An employee assistance program
  • Paid time off
  • Parental leave
  • Paid holidays
  • 401(k) retirement savings plan with employer match
  • Performance-based bonus eligibility
  • Employee referral bonuses
  • Tuition reimbursement
  • Pet insurance
  • Long-term care insurance