GM-posted about 13 hours ago
Full-time • Mid Level
Hybrid • Austin, TX
5,001-10,000 employees

This role is categorized as hybrid. This means the successful candidate is expected to report to the Austin IT Innovation Center at least three times per week, or more often if required by the business.

What You'll Do

  • Communicate and maintain Master Data, Metadata, Data Management Repositories, Logical Data Models, and Data Standards
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build industrialized analytic datasets and delivery mechanisms that utilize the data pipeline to deliver actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Work with business partners on data-related technical issues and develop requirements to support their data infrastructure needs
  • Create highly consistent and accurate analytic datasets suitable for business intelligence and data scientist team members
  • Bachelor’s degree in Computer Science, Software Engineering, or related field, or equivalent experience
  • 7 or more years of experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • At least 3 years of hands-on experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Mastery of databases, including advanced SQL and NoSQL databases such as Postgres and Cassandra
  • Data wrangling and preparation tools: Alteryx, Trifacta, SAS, Datameer
  • Stream-processing systems: Storm, Spark Streaming, etc.
  • Ability to tackle problems quickly and completely
  • Ability to identify tasks which require automation and automate them
  • A demonstrable understanding of networking/distributed computing environment concepts
  • Ability to multi-task and stay organized in a dynamic work environment
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • GM offers a variety of health and wellbeing benefit programs.
  • Benefit options include medical, dental, vision, Health Savings Account, Flexible Spending Accounts, retirement savings plan, sickness and accident benefits, life insurance, paid vacation & holidays, tuition assistance programs, employee assistance program, GM vehicle discounts and more.