Software Engineer

Trimble · Westminster, CO
$144,584 - $195,252 · Posted 86 days ago

About The Position

Trimble is an industrial technology leader transforming the way the world works by delivering solutions that connect the physical and digital worlds. Core technologies in positioning, modeling, connectivity, and data analytics enable customers to improve productivity, quality, safety, transparency, and sustainability. From purpose-built products to enterprise lifecycle solutions, Trimble is transforming industries such as agriculture, construction, geospatial, and transportation.

The Enterprise Data Operations team is the central nervous system of Trimble's data ecosystem. Our mission is to empower business units across the globe with trusted, high-quality data that drives analytics, business intelligence, and strategic decision-making. We are responsible for the architecture, development, and governance of the enterprise data platform that powers Trimble's growth.

We are seeking a Staff Data Engineer to join our team at our corporate headquarters in Westminster, Colorado. This is a pivotal, senior-level role for a technical thought leader passionate about building robust, scalable, and elegant data solutions. You will not just build pipelines; you will set the technical direction, mentor engineers, and solve our most complex data challenges. As a technical expert, you will be instrumental in designing and delivering the next generation of our enterprise data products.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related technical field.
  • 8+ years of progressive experience in data engineering, software engineering, or a related role, with a demonstrated track record of delivering complex, enterprise-scale data products.
  • Expert-level proficiency in SQL and at least one programming language, preferably Python.
  • Extensive hands-on experience with a major cloud platform (AWS, Azure, or GCP) and its data services (e.g., S3, Redshift, Glue, Lambda; ADLS, Synapse, Data Factory; BigQuery, Cloud Storage).
  • Deep expertise with modern big data processing frameworks like Apache Spark.
  • Proven experience designing and building large-scale data warehouses and data lakes from the ground up, with a deep understanding of data modeling techniques (e.g., Kimball, Inmon, Data Vault).
  • Demonstrated ability to lead technical projects, influence architecture decisions, and mentor other engineers.
  • Excellent problem-solving skills and the ability to navigate ambiguity in a fast-paced environment.

Nice To Haves

  • Master's degree in Computer Science or a related field.
  • Experience with modern data stack technologies and orchestration tools such as dbt, Airflow, or Prefect.
  • Experience with streaming data technologies like Kafka, Kinesis, or Spark Streaming.
  • Knowledge of modern data architecture concepts like Data Mesh or Data Fabric.
  • Experience with Infrastructure as Code (e.g., Terraform, CloudFormation) and CI/CD best practices for data pipelines.
  • Familiarity with containerization technologies like Docker and Kubernetes.
  • Relevant cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer).

Responsibilities

  • Lead the design and evolution of our enterprise data platform, ensuring it is scalable, reliable, and secure.
  • Champion and implement best practices in data architecture, data modeling, and data engineering.
  • Architect, build, and optimize complex, large-scale ETL/ELT data pipelines from a wide variety of source systems using modern big data technologies on cloud platforms (AWS, Azure, GCP).
  • Act as a technical thought leader and subject matter expert for data engineering within the organization.
  • Mentor and guide junior and mid-level data engineers, fostering a culture of technical excellence and innovation through code reviews, design sessions, and knowledge sharing.
  • Develop robust, reusable data processing frameworks and components.
  • Write clean, high-quality, and maintainable code in Python and SQL.
  • Profile and tune data processing jobs to improve performance and reduce cost.
  • Partner closely with data scientists, BI developers, data analysts, and business stakeholders to understand their data needs and translate complex business requirements into scalable technical solutions.
  • Drive automation in data quality, data governance, and platform operations.
  • Troubleshoot and resolve complex data integrity and performance issues across the enterprise data landscape.

Benefits

  • Medical, dental, vision, life, and disability insurance; time-off plans; and retirement plans.
  • Tax-savings plans for health, dependent-care, and commuter expenses.
  • Paid Parental Leave.
  • Employee Stock Purchase Plan.

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Education Level: Bachelor's degree
  • Number of Employees: 1,001-5,000 employees
