Luxer One • Posted 1 day ago
Full-time • Mid Level
Remote
11-50 employees

We are the Luxer One Data Engineering Team, an Agile group responsible for building, maintaining, and securing our data platform and pipelines across multiple cloud environments. This position plays a critical role in enabling the company to scale its analytics and insights capabilities while safeguarding sensitive data.

If you are a Senior Data Engineer with a passion for building robust, secure, and compliant data solutions, strong SQL and distributed data processing experience, and a working knowledge of cloud infrastructure, we would love to talk with you. Come join the Luxer One team!

Our stack is based on Amazon Web Services (AWS) and Google Cloud Platform (GCP) and uses technologies such as AWS Glue, Apache Spark, Redshift, and BigQuery, with Python and Scala, to deliver scalable, secure, and high-performance data solutions. As a Senior Data Engineer, you will apply your knowledge of core data engineering principles, distributed data processing, and cloud infrastructure to design and implement secure, robust data pipelines, support analytics workloads, and ensure the reliability, scalability, and compliance of our data ecosystem.

Responsibilities:
  • Design, develop, and maintain scalable and secure data pipelines and ETL processes using AWS Glue, Spark, and other modern data tools.
  • Architect, implement, and optimize data warehouse solutions in Amazon Redshift and Google BigQuery to support analytics and business intelligence.
  • Collaborate with software engineering, analytics, and product teams to ensure data models meet business requirements and security standards.
  • Author and review technical documentation for data pipelines, schemas, workflows, and security controls.
  • Implement data quality, validation, and monitoring processes to ensure reliable and accurate data.
  • Apply data security and privacy best practices (encryption at rest/in transit, IAM roles, access controls, and data masking) across all data storage and movement.
  • Work closely with security teams to ensure pipelines adhere to regulatory and compliance requirements (SOC 2, GDPR, CCPA, etc.).
  • Support and mentor junior data engineers in best practices, including secure coding and data handling.
  • Participate in on-call and incident response activities for critical data pipelines.
Requirements:
  • 5+ years of experience in data engineering or a related field.
  • Strong SQL skills with demonstrable experience building complex queries and optimizing performance on large datasets.
  • 3+ years hands-on experience building data pipelines with AWS Glue, Apache Spark, or similar ETL frameworks.
  • 3+ years production experience with Amazon Redshift, Google BigQuery, or other large-scale data warehouses.
  • Proficiency in Python and Scala for data processing and automation.
  • Strong understanding of cloud infrastructure and security concepts (AWS and/or GCP), including encryption, storage, networking, IAM, and access control.
  • Experience with data modeling, schema design, and best practices for secure data warehousing.
  • Experience implementing data security controls such as encryption, key management, access auditing, and data masking.
  • Experience with data privacy regulations (GDPR, CCPA, or SOC 2 controls) and aligning pipelines to compliance requirements.
  • Advanced troubleshooting skills including analyzing logs, application monitoring, and driving problem resolution.
  • Excellent verbal and written communication skills.
  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field, or an equivalent combination of education and experience.
Benefits:
  • Comprehensive medical, dental, and vision coverage
  • A 401(k) plan with employer match to help you invest in your future
  • Tuition reimbursement to keep your career moving forward
  • Paid vacation and sick time