Responsibilities
- Design, implement, and maintain real-time data ingestion and ETL pipelines.
- Manage and maintain Apache NiFi clusters running as containerized workloads in Kubernetes.
- Manage other components of our data infrastructure, such as Apache Kafka, as required.
- Analyze new data sources and integrate them into our data pipeline.
- Collaborate with other teams to ensure successful integration of the data infrastructure with existing systems.

Qualifications
- Experience working with Linux-based servers and systems.
- Experience with at least one modern database (e.g., PostgreSQL), ETL (e.g., Apache NiFi), or data science (e.g., MapReduce) technology.
- Ability to work both independently and in a collaborative team environment.
- Experience with AWS services (e.g., S3).
- Experience containerizing applications with Docker and deploying them to a container orchestration platform (e.g., Kubernetes).
- Experience with PostgreSQL.
- Experience with Apache Kafka.
- Experience with GeoServer or Open Geospatial Consortium (OGC) standards.
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Elasticsearch, Kibana).
- Experience with CI/CD tools (e.g., Helm, Harbor, ArgoCD, Jenkins).
- Experience with Agile methodologies.
- Bachelor's degree in computer science, data science, engineering, math, statistics, operations research, or a related field.
- Current TS/SCI eligibility with a current CI polygraph.
- U.S. citizenship is required for this position.
- DoD 8570 IAT Level 3 (CompTIA Security+) compliant.
- AWS certification (e.g., AWS Solutions Architect Associate or Professional).
Job Type
Full-time
Career Level
Entry Level