L3Harris Technologies
Full-time
Melbourne, FL
Transportation Equipment Manufacturing

L3Harris is dedicated to recruiting and developing high-performing talent who are passionate about what they do. Our employees are united by a shared dedication to our customers' mission and by a quest for professional growth. L3Harris provides an inclusive, engaging environment designed to empower employees and promote work-life success. Fundamental to our culture is an unwavering focus on values, dedication to our communities, and commitment to excellence in everything we do. L3Harris Technologies is the Trusted Disruptor in the defense industry. With customers' mission-critical needs always in mind, our employees deliver end-to-end technology solutions connecting the space, air, land, sea, and cyber domains in the interest of national security.

  • Design, build, and maintain robust data pipelines to ensure reliable data flow across the enterprise.
  • Maintain data pipeline schedules, orchestrate workflows, and monitor the overall health of data pipelines to ensure continuous data availability.
  • Create, update, and optimize data connections, datasets, and transformations to align with business needs.
  • Troubleshoot and resolve data sync issues, ensuring consistent and correct data flow from source systems.
  • Collaborate with cross-functional teams to uphold data quality standards and ensure accurate data is available for use.
  • Utilize Palantir Foundry to establish data connections to source applications, extract and load data, and design complex logical data models that meet functional and technical specifications.
  • Develop and manage data cleansing, consolidation, and integration mechanisms to support big data analytics at scale (a minimal sketch of such a cleansing step follows this list).
  • Build visualizations using Palantir Foundry tools and assist business users with testing, troubleshooting, and documentation creation, including data maintenance guides.
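To make the day-to-day work concrete, here is a minimal PySpark sketch of the kind of cleansing-and-load step described above. The dataset paths, column names, and business key are hypothetical illustrations, not details from this posting; a production pipeline would add schema enforcement, incremental logic, and health monitoring.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("erp_orders_cleanse").getOrCreate()

    # Hypothetical raw extract landed from a source ERP system.
    orders = spark.read.parquet("/data/raw/erp_orders")

    cleansed = (
        orders
        .dropDuplicates(["order_id"])                       # deduplicate on the business key
        .withColumn("order_date", F.to_date("order_date"))  # normalize the date column
        .filter(F.col("order_total").isNotNull())           # drop incomplete records
    )

    # Write the curated dataset for downstream consumers (BI, analytics).
    cleansed.write.mode("overwrite").parquet("/data/curated/erp_orders")

The same filter-normalize-deduplicate pattern applies whether the transform runs in Palantir Foundry, Databricks, or a plain Spark cluster.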
  • Bachelor's Degree and a minimum of 2 years of prior relevant experience.
  • Graduate Degree and a minimum of 0 to 2 years of prior related experience.
  • In lieu of a degree, a minimum of 6 years of prior related experience.
  • 2+ years of experience designing and developing data pipelines in PySpark, Spark SQL, SQL, or Code Build.
  • 2+ years of experience building and deploying data synchronization schedules and maintaining data pipelines using Palantir Foundry.
  • 2+ years of experience with Data Pipeline development or ETL tools such as Palantir Foundry, Azure Data Factory, SSIS, or Python.
  • 2+ years of experience in Data Integration.
  • Strong understanding of Business Intelligence (BI) and Data Warehouse (DW) development methodologies.
  • Hands-on experience with the Snowflake Cloud Data Platform, including data architecture, query optimization, and performance tuning (see the sketch after this list).
  • Proficiency in Python, PySpark, Pandas, JavaScript, or similar tools, and with platforms such as Databricks, for data processing and automation.
  • Experience with other ETL tools such as Azure Data Factory (ADF), SSIS, Informatica, or Talend is highly desirable.
  • Familiarity with connecting and extracting data from various ERP applications, including Oracle EBS, SAP ECC/S4, Deltek Costpoint, and more.
  • Experience with AI tools such as OpenAI, Palantir AIP, Snowflake Cortex, or similar.
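As a hedged illustration of the Snowflake query-tuning skill noted above, the sketch below uses the snowflake-connector-python package to inspect a query plan with EXPLAIN, which returns the plan without executing the query. The account, credentials, and table names are placeholders, not details from this posting.

    import snowflake.connector

    # Placeholder connection details; real values would come from a secrets manager.
    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="********",
        warehouse="ANALYTICS_WH",
        database="SALES",
        schema="CURATED",
    )

    try:
        cur = conn.cursor()
        # EXPLAIN surfaces the query plan (join order, pruning, scan size),
        # a typical first step when tuning slow queries.
        cur.execute(
            "EXPLAIN SELECT order_id, SUM(order_total) FROM orders GROUP BY order_id"
        )
        for row in cur.fetchall():
            print(row)
    finally:
        conn.close()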