Apps Dev Tech Lead Analyst - Vice President

Citi
Irving, TX
Onsite

About The Position

As a key member of the global development team, this role involves innovating and developing well-architected technical solutions by partnering with project managers, business stakeholders, and senior managers. The position requires consulting with users and other technology groups, providing advanced programming insights, and driving cross-functional collaboration to achieve strategic organizational goals. A core aspect is proactively identifying and implementing system enhancements for new products and process improvements.

The role also leads the resolution of high-impact problems and critical projects through in-depth evaluation of business processes, system architectures, and industry standards, employing advanced analytical thinking to define issues, uncover root causes, and develop sustainable solutions. The Tech Lead Analyst serves as a subject matter expert in application programming, ensuring designs adhere to architectural blueprints and strategic technology roadmaps, and enforcing robust standards for coding, testing, debugging, and implementation. The role includes significant mentorship and talent development responsibilities, acting as a trusted advisor and coach for mid-level developers and analysts, providing technical guidance, and conducting code reviews. Operational excellence, autonomy, ownership of critical initiatives, and proactive risk management with a commitment to regulatory compliance are also key.

The position specifically involves designing, developing, and maintaining robust, scalable, and high-performance data pipelines using PySpark, collaborating with data scientists and stakeholders, optimizing Spark jobs, and implementing data quality checks. It also encompasses designing and optimizing data architectures, pipelines, and models, building and deploying efficient ETL/ELT processes with Python and PySpark, and implementing best practices for data quality, governance, and security. Monitoring, troubleshooting, and optimizing data pipeline performance are also critical.
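
To ground the data engineering portion of the role, the sketch below illustrates the kind of PySpark pipeline work described above: ingest, a simple data quality check, a transformation, and a load step. This is a minimal sketch only; the paths, table layout, and column names (trade_id, trade_ts, desk, notional) are hypothetical placeholders, not details from this posting.

    # Minimal PySpark ETL sketch: ingest, validate, transform, load.
    # All paths, tables, and columns are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("trades_etl").getOrCreate()

    # Ingest: read raw records from a (hypothetical) data lake path.
    raw = spark.read.parquet("s3://datalake/raw/trades/")

    # Data quality check: drop rows with missing keys or non-positive notionals.
    valid = raw.filter(F.col("trade_id").isNotNull() & (F.col("notional") > 0))
    rejected = raw.count() - valid.count()
    if rejected > 0:
        print(f"Data quality: dropped {rejected} invalid rows")

    # Transform: derive a date column and aggregate daily notional per desk.
    daily = (
        valid.withColumn("trade_date", F.to_date("trade_ts"))
             .groupBy("trade_date", "desk")
             .agg(F.sum("notional").alias("total_notional"))
    )

    # Load: write to a partitioned, curated zone of the warehouse.
    daily.write.mode("overwrite").partitionBy("trade_date").parquet(
        "s3://warehouse/curated/daily_notional/"
    )

    spark.stop()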

Requirements

  • 6-10 years of progressive experience in systems analysis and programming of software applications, with a proven track record of implementing successful projects.
  • Strong proficiency in Java application technologies, including deep experience with TDD (Test-Driven Development), Spring framework, and Microservices architecture.
  • Extensive hands-on experience with PySpark and advanced Python programming skills.
  • Proven experience with Big Data ecosystems, including Cloudera and/or Databricks.
  • Hands-on experience with distributed query engines like Starburst (Trino/Presto).
  • Proficient in designing and managing complex workflows using scheduling tools, particularly Apache Airflow (a minimal DAG sketch follows this list).
  • Strong expertise in SQL and experience with relational and non-relational databases.
  • Excellent knowledge of algorithms, data structures, and design patterns.
  • Strong Java experience: Java core, collections, concurrency, streams.
  • Frameworks and APIs: Spring (Core, Batch, Integration, MVC, Boot, Data), Hibernate, Jackson, JAX-RS, JPA, JAXB.
  • Messaging: JMS, Kafka.
  • Testing: JUnit, mocking frameworks (Mockito, PowerMock).
  • Experience in performance enhancement using parallel processing and multithreading.
  • Understanding of locking and synchronization.
  • Understanding of Docker and Kubernetes.
  • Experience in RESTful API development and integration, deployment frameworks, and source control tools such as Git.
  • Proficiency in Linux environments.
  • Experience with job scheduling.
  • Working knowledge of project management techniques and methods, with a focus on agile methodologies.
  • Ability to thrive in a fast-paced environment, manage multiple deadlines, and adapt quickly to evolving requirements and priorities.
  • A strong team player with excellent communication skills, capable of working effectively with global teams to deliver integrated solutions.
  • Bachelor’s degree/University degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
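
As a point of reference for the Airflow requirement above, here is a minimal sketch of a DAG that schedules a nightly PySpark job. It assumes a recent Airflow 2.x release with the Apache Spark provider installed; the DAG id, schedule, and script path are hypothetical placeholders.

    # Minimal Airflow DAG sketch: schedule a nightly PySpark job.
    # Assumes Airflow 2.x and apache-airflow-providers-apache-spark.
    from datetime import datetime, timedelta
    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="daily_trades_etl",                  # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="0 2 * * *",                       # run nightly at 02:00
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
    ) as dag:
        run_etl = SparkSubmitOperator(
            task_id="run_trades_etl",
            application="/opt/jobs/trades_etl.py",  # hypothetical PySpark script
            conn_id="spark_default",
            conf={"spark.sql.shuffle.partitions": "200"},
        )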

Nice To Haves

  • Experience with distributed caches such as Apache Geode (GemFire).
  • Experience with real-time data streaming and processing using PySpark Structured Streaming.
  • Knowledge of machine learning concepts and MLOps practices, especially integrating ML workflows with PySpark.
  • Familiarity with data visualization tools (e.g., Tableau, Power BI).
  • Contributions to open-source data projects.
  • Strong experience with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
  • Experience with AI development tools (e.g., Copilot, Devin, and Claude).
  • Prior experience or a keen interest in the financial services industry.
  • Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements.
  • Experience working in a fast-paced environment.
  • Flexible and adaptive team player.
  • Excellent analytical, communication, and interpersonal skills.

Responsibilities

  • Partner closely with project managers, business stakeholders, and senior managers to translate complex business requirements into well-architected technical solutions.
  • Consult with users and other technology groups, providing advanced programming insights and support.
  • Drive cross-functional collaboration with diverse management teams to ensure seamless integration of functions, aligning efforts to achieve strategic organizational goals.
  • Proactively identify, define, and implement necessary system enhancements to facilitate the successful deployment of new products and process improvements.
  • Lead the resolution of high-impact problems and critical projects through in-depth evaluation of intricate business processes, complex system architectures, and relevant industry standards.
  • Employ advanced analytical and interpretive thinking to define issues, uncover root causes, and develop innovative, sustainable solutions.
  • Consult with users, clients, and other technology groups on issues, and recommend programming solutions.
  • Analyze complex technical and business challenges, and propose innovative solutions that enhance system functionality and business processes.
  • Serve as a subject matter expert in application programming, ensuring that all application designs rigorously adhere to the overall architectural blueprint and strategic technology roadmap.
  • Leverage an advanced understanding of system flow to develop and enforce robust standards for coding, testing, debugging, and implementation across development teams.
  • Act as a trusted advisor and coach for mid-level developers and analysts, providing guidance, fostering skill development, and judiciously allocating work to maximize team potential and project success.
  • Provide technical guidance, mentorship, and code reviews to junior data engineers, fostering a culture of excellence and continuous improvement.
  • Ensure adherence to best practices and essential procedures.
  • Operate with a high degree of independence and judgment, taking ownership of critical initiatives and driving them to successful completion.
  • Proactively assess and manage technical risks, demonstrating a strong commitment to regulatory compliance, ethical judgment, and transparent reporting of control issues.
  • Design, develop, and maintain robust, scalable, and high-performance data pipelines using PySpark.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality data solutions.
  • Optimize and tune Spark jobs for performance and efficiency (a brief tuning sketch follows this list).
  • Implement data quality checks and ensure data integrity across all data pipelines.
  • Design, develop, and optimize data architectures, pipelines, and data models to support various business needs, including analytics, reporting, and machine learning.
  • Build, test, and deploy highly scalable and efficient ETL/ELT processes using Python and PySpark to ingest, transform, and load data from diverse sources into data warehouses and data lakes.
  • Develop and optimize complex data transformations using PySpark.
  • Implement best practices for data quality, data governance, and data security to ensure the integrity, reliability, and privacy of our data assets.
  • Monitor, troubleshoot, and optimize data pipeline performance, ensuring data availability and timely delivery, particularly for PySpark jobs.
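
For the Spark tuning responsibility noted above, the sketch below shows two common, illustrative techniques: broadcasting a small dimension table to avoid a shuffle join, and repartitioning on the grouping key before a wide aggregation. Table names, paths, and sizes are hypothetical placeholders.

    # Illustrative Spark tuning sketch (hypothetical tables and paths).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

    trades = spark.read.parquet("s3://warehouse/curated/trades/")   # large fact table
    desks = spark.read.parquet("s3://warehouse/reference/desks/")   # small dimension table

    # 1) Broadcast the small dimension table so the join avoids a full shuffle.
    enriched = trades.join(F.broadcast(desks), on="desk_id", how="left")

    # 2) Repartition on the grouping key before a wide aggregation to balance tasks,
    #    then cache the result if it is reused by multiple downstream actions.
    summary = (
        enriched.repartition(200, "desk_id")
                .groupBy("desk_id")
                .agg(F.sum("notional").alias("total_notional"))
                .cache()
    )
    summary.count()  # materialize the cache before reuse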

Benefits

  • Medical, dental, and vision coverage
  • 401(k)
  • Life, accident, and disability insurance
  • Wellness programs
  • Paid time off packages, including planned time off (vacation), unplanned time off (sick leave), and paid holidays

What This Job Offers

  • Job Type: Full-time
  • Career Level: Senior
  • Number of Employees: 5,001-10,000
