About The Position

GovCIO is currently hiring for a Data Warehousing Specialist (Network Engineer) within Infrastructure Operations. This position will be located within the United States and will be fully remote. GovCIO is a team of transformers--people who are passionate about transforming government IT. Every day, we make a positive impact by delivering innovative IT services and solutions that improve how government agencies operate and serve our citizens. But we can't do it alone. We need great people to help us do great things - for our customers, our culture, and our ability to attract other great people. We are changing the face of government IT and building a workforce that fuels this mission. Are you ready to be a transformer?

Requirements

  • Bachelor's degree with 13+ years of experience (or commensurate experience), OR a Master's degree or higher (in a related discipline) with 10 years of experience
  • Skills in data warehousing, specifically the processing of computer-generated log data
  • Clearance Required: Must be able to obtain and maintain an AOUSC Public Trust clearance

Nice To Haves

  • Extensive experience with the Cribl data engine

Responsibilities

  • Develop and apply best practices and tools for data ingestion, indexing, and management to optimize data sources and refine data collection processes so that only pertinent data is captured.
  • Plan and perform Cribl platform upgrades (Leader, Worker, and Edge nodes) following defined change control procedures.
  • Manage and optimize the Cribl distributed infrastructure, ensuring scalability, stability, and efficient data routing.
  • Continuously monitor Cribl performance, including throughput, queue depth, and worker health metrics.
  • Develop and maintain Cribl pipelines for new data sources, implementing filtering, sampling, and enrichment logic (an illustrative sketch of this kind of pipeline logic follows this list).
  • Migrate existing Splunk forwarder-based data inputs to Cribl for improved control and flexibility.
  • Build and maintain Cribl Packs for standardized configurations across multiple environments.
  • Implement data reduction and enhancement workflows to minimize ingestion volume and improve data quality.
  • Maintain and enhance Ansible playbooks for automated deployments, configurations, and upgrades.
  • Integrate GitOps CI/CD pipelines (e.g., GitLab, Jenkins, Terraform) to manage configuration-as-code for both Splunk and Cribl.
  • Develop, test, and review merge requests related to dashboards, alerts, saved searches, and data onboarding pipelines.
  • Perform Splunk core upgrades (indexers, search heads, cluster masters, deployers), ensuring backward compatibility and minimal downtime.
  • Upgrade and validate Splunk Add-ons and Apps, maintaining functionality and CIM compliance.
  • Develop and maintain custom props, transforms, eventtypes, and lookups to normalize data consistently.
  • Ensure CIM compliance for all add-ons and sourcetypes used across the platform.
  • Handle escalations from Operations and perform deep-dive troubleshooting on ingestion, parsing, or performance issues.
  • Perform break/fix analysis on Splunk core services such as KVStore, clustering, deployment server, and scheduler.
  • Conduct performance tuning for search optimization, bucket management, and scheduler balancing across the search head cluster (SHC).
  • Design and maintain retention, archival, and index management strategies to align with business and compliance goals.
  • Manage license allocation, volume forecasting, and capacity planning across indexer clusters.
  • Develop and maintain monitoring and alerting integrations for Cribl and Splunk infrastructure health (hedged monitoring sketches follow this list).
  • Collaborate with Operations on incident triage, root cause analysis, and postmortem documentation.
  • Create and maintain runbooks and engineering guides for deployments, upgrades, and troubleshooting.
  • Participate in architecture and design discussions to ensure Splunk and Cribl meet enterprise scaling and reliability needs.
  • Implement security and compliance controls including token rotation, TLS configurations, and secret management via Vault or GCP Secret Manager.
  • Perform disaster recovery testing and validate replication and failover processes across clusters.
  • Collaborate with governance teams to align on data retention, anonymization, and privacy requirements.
  • Support continuous improvement by analyzing ingestion efficiency, performance benchmarks, and automation opportunities.
  • Lead knowledge-sharing sessions and technical handoffs with Operations for newly deployed features or pipelines.
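
Cribl pipelines are defined in Cribl's own configuration and expression syntax rather than in Python, so the following is only a minimal Python sketch of the kind of filtering, sampling, and enrichment logic such a pipeline implements. The field names (sourcetype, severity, _raw), the dropped sourcetype, and the 1-in-10 sampling ratio are illustrative assumptions, not values taken from this posting.

```python
import hashlib
import random
from typing import Optional

# Illustrative assumptions: events are dicts carrying 'sourcetype',
# 'severity', and '_raw'; DEBUG traffic is sampled 1-in-10; every kept
# event is enriched with routing/context fields before going downstream.
SAMPLE_RATIO = 10
DROP_SOURCETYPES = {"noisy:healthcheck"}            # filtered out entirely
ENRICHMENT = {"env": "prod", "pipeline": "example"}

def process(event: dict) -> Optional[dict]:
    """Return the (possibly enriched) event, or None to drop it."""
    # Filtering: drop sourcetypes we never want to ingest.
    if event.get("sourcetype") in DROP_SOURCETYPES:
        return None

    # Sampling: keep roughly 1 in SAMPLE_RATIO low-severity events.
    if event.get("severity") == "DEBUG" and random.randrange(SAMPLE_RATIO) != 0:
        return None

    # Enrichment: add context fields and a stable hash of the raw event.
    event.update(ENRICHMENT)
    event["event_hash"] = hashlib.sha256(
        event.get("_raw", "").encode("utf-8")
    ).hexdigest()
    return event

if __name__ == "__main__":
    sample = {"sourcetype": "app:log", "severity": "ERROR", "_raw": "disk full"}
    print(process(sample))
```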
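
For the Splunk side of monitoring and capacity planning, a scheduled health check might resemble the sketch below. It assumes the official splunk-sdk Python package is installed and that host and credentials are supplied through environment variables; the SPL query against the license usage log and the 8089 management port are common conventions, not requirements stated in this posting.

```python
import os
import splunklib.client as client
import splunklib.results as results

# Connection details are assumptions supplied via environment variables;
# 8089 is Splunk's default management port.
service = client.connect(
    host=os.environ.get("SPLUNK_HOST", "localhost"),
    port=8089,
    username=os.environ["SPLUNK_USER"],
    password=os.environ["SPLUNK_PASS"],
)

# Example SPL: daily indexed volume per index from the license usage log.
query = (
    "search index=_internal source=*license_usage.log type=Usage earliest=-24h "
    "| stats sum(b) as bytes by idx "
    "| eval gb=round(bytes/1024/1024/1024, 2) "
    "| sort - gb"
)

# Oneshot searches block until complete and return results directly.
reader = results.JSONResultsReader(
    service.jobs.oneshot(query, output_mode="json")
)
for row in reader:
    if isinstance(row, dict):
        print(f"{row['idx']}: {row['gb']} GB")
```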
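
A comparable check of Cribl worker health could go through the Leader's REST API, as in the sketch below. The Leader URL and credentials are assumptions supplied by the environment, and the endpoint paths (/api/v1/auth/login, /api/v1/master/workers) are assumptions based on common Cribl Leader API layouts; they should be verified against the API documentation for the deployed Cribl version.

```python
import os
import requests

# Leader URL and credentials are assumptions supplied by the environment.
LEADER = os.environ.get("CRIBL_LEADER_URL", "https://cribl-leader:9000")

# NOTE: the endpoint paths below are assumptions and may differ between
# Cribl versions; confirm them against your deployment's API docs.
auth = requests.post(
    f"{LEADER}/api/v1/auth/login",
    json={
        "username": os.environ["CRIBL_USER"],
        "password": os.environ["CRIBL_PASS"],
    },
    timeout=10,
)
auth.raise_for_status()
headers = {"Authorization": f"Bearer {auth.json()['token']}"}

# List worker nodes and print their reported status.
workers = requests.get(
    f"{LEADER}/api/v1/master/workers", headers=headers, timeout=10
)
workers.raise_for_status()
for node in workers.json().get("items", []):
    info = node.get("info", {})
    print(node.get("id"), info.get("hostname"), node.get("status"))
```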

Benefits

  • Employee Assistance Program (EAP)
  • Corporate Discounts
  • Learning & Development platform, including certification preparation content
  • Training, Education and Certification Assistance
  • Referral Bonus Program
  • Internal Mobility Program
  • Pet Insurance
  • Flexible Work Environment