Hitachi • Posted 9 months ago
Full-time • Mid Level
Onsite • Santa Clara, CA
Electrical Equipment, Appliance, and Component Manufacturing

Hitachi's R&D Center, located in Santa Clara, CA, serves as a pivotal technology hub supporting Hitachi's global Social Innovation business. Our focus spans original and applied research, technology innovation, intellectual property creation, and solving real-world challenges. We collaborate closely with Hitachi's product and business divisions, contributing to Hitachi's IoT and AI business across a variety of industries, with a special focus on optimized manufacturing.

We are looking for a Software Engineer with expertise in databases and data management to join our dynamic team. In this role, you will contribute to our Industrial Digital Transformation solution portfolio by developing and testing a unified data layer for heterogeneous data systems. You'll get the chance to engage in real-world deployments in manufacturing facilities at scale. This position is a perfect match for a motivated programmer who is passionate about empowering society through the tremendous potential of Industrial IoT and AI.

  • Contribute to the ongoing software development of a resilient and efficient unified data layer for industrial AI/ML applications.
  • Collaborate within a cross-disciplinary team that includes software, network, sensing, and domain experts, building upon existing designs and architectures to add unique features and differentiators.
  • An MS in Computer Science, Computer Engineering, or a related field, accompanied by at least 2 years of experience in software development.
  • Experience with SQL (writing and optimizing queries) and Python (scripting, automation, and data processing) is required.
  • Experience with at least one SQL, NoSQL, or cloud database (e.g., PostgreSQL, MySQL, MongoDB, Snowflake, DynamoDB) is required.
  • Experience with distributed query engines such as Trino or Dremio is required.
  • Experience with version control (e.g., Git) and CI/CD pipelines is required.
  • Experience with query optimization (such as the Trino cost-based optimizer or Dremio Reflections) is a strong plus.
  • Experience with metadata management tools such as Apache Atlas or similar is a strong plus.
  • Experience with data virtualization, especially with Dremio or Trino, is a strong plus.
  • Experience with ETL and building data pipelines and associated software is a plus.
  • Experience with big data and distributed computing frameworks such as Apache Spark (PySpark) or Dask is a plus.
  • Experience in cloud platforms and cloud-native services, particularly in AWS or Azure, is a plus.
  • Experience in Java/Scala for big data performance optimization is a plus.
  • Excellent interpersonal and communication skills.
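For a rough sense of the SQL-plus-Python work the requirements above describe, here is a minimal sketch of querying and aggregating industrial sensor data. The table name, machine names, and values are invented for illustration, and the standard-library sqlite3 module stands in for the production databases named above (PostgreSQL, MySQL, Snowflake, etc.):

```python
import sqlite3

# Hypothetical example: a tiny in-memory table of machine sensor readings,
# standing in for one data source behind a unified data layer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (machine TEXT, metric TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [
        ("press-1", "temp_c", 71.5),
        ("press-1", "temp_c", 74.0),
        ("press-2", "temp_c", 68.2),
    ],
)

# A typical aggregation: average temperature per machine.
rows = conn.execute(
    "SELECT machine, AVG(value) FROM readings "
    "WHERE metric = 'temp_c' GROUP BY machine ORDER BY machine"
).fetchall()

for machine, avg_temp in rows:
    print(f"{machine}: {avg_temp:.1f}")
```

In the role itself, the same kind of query would typically run against a distributed engine such as Trino or Dremio, federating several heterogeneous sources behind one SQL interface rather than a single local database.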