Software Engineer II, Data Engineering

Brain Corp · San Diego, CA
$130,000 · Onsite

About The Position

As a member of our Software Engineering team, the Software Engineer II, Data Engineering will play a key role in designing and maintaining the data infrastructure that powers the BrainOS platform. This role requires a solid foundation in data engineering concepts and technologies, with experience building scalable data pipelines and ensuring the integrity and performance of data systems. The Software Engineer II, Data Engineering will work independently on mid-sized projects and collaborate with senior engineers to tackle larger initiatives.

Requirements

  • BS or MS in Computer Science or applicable engineering discipline
  • 2-5 years of proven software development experience, with at least 2 of those years focused on data engineering
  • Proficiency in SQL as well as one or more programming languages (Python, Go, or TypeScript)
  • Experience with data warehousing and database technologies (BigQuery, Snowflake, Firestore, MySQL, PostgreSQL)
  • Familiarity with stream processing frameworks and messaging systems (Apache Beam, Pub/Sub, Spark Streaming)
  • Understanding of data governance, security, and compliance best practices
  • Strong problem-solving skills and ability to work independently on projects
  • Familiarity with Generative AI tools that enhance development workflows, such as code generation, data exploration, and documentation support
  • Strong written and verbal communication skills, with the ability to articulate technical concepts to both technical and non-technical stakeholders

Nice To Haves

  • Experience with machine learning models and data science methodologies
  • Experience with Google Cloud and their data ecosystem
  • Familiarity with BI tools (e.g., Tableau, Power BI) and data frameworks (e.g., Hadoop, Spark)
  • Experience with infrastructure as code (e.g., Terraform, Pulumi) and containerization and orchestration tools (e.g., Docker, Kubernetes)

Responsibilities

  • Design and Develop Data Pipelines: Implement robust, scalable pipelines for processing structured and unstructured data
  • Data Architecture & Modeling: Contribute to the design of complex data models and optimize storage solutions in support of engineering and business objectives
  • Optimize Performance and Scalability: Enhance the efficiency and scalability of data pipelines and storage systems by identifying bottlenecks and tuning cluster resources
  • Collaborate and Support: Partner with data analysts, data scientists, and other business teams to resolve data-related technical issues and support their data infrastructure needs
  • Data Security & Compliance: Implement security best practices, including encryption and access control policies, while maintaining data integrity and compliance with quality standards
  • Incident Management: Monitor and troubleshoot data pipeline failures and reliability issues