Apex Fintech Solutions • Posted 2 days ago
Full-time • Mid Level
Hybrid • Austin, TX
1,001-5,000 employees

At Apex Fintech Solutions, we're transforming how businesses leverage data to drive strategic decisions in the fintech and wealth tech community. Our Risk & Regulatory Data Products team is at the heart of this mission, building and maintaining scalable cloud infrastructure that powers our analytics capabilities and supports our clearing and custody services. As a Senior Data Engineer on our team, you will design, build, and optimize robust, cloud-native data pipelines that transform raw financial data into valuable business insights. This role initially focuses on building data products that enable reporting and analytics for our Trade Processing System, alongside performant data systems for a wide variety of use cases in the Margin, Risk, Regulatory Tech, and Securities Lending domains. You'll work closely with these cross-functional teams, along with Data Experience and Analytics, to understand requirements and implement scalable solutions that support our growing data needs. This role offers an exciting opportunity to work with cutting-edge cloud technologies while solving complex data challenges in a fast-paced financial services environment.

Data Pipeline Development
  • Design, develop, and maintain scalable ETL/ELT pipelines for our enterprise data warehouse and data lake
  • Implement robust data processing systems that support our Data Products
  • Build systems to handle real-time and batch data updates using change data capture (CDC) tools such as HVR, Dataflow, Datastream
  • Build and extend existing systems that process, store, and move streaming and batch data from SQL-based sources to centralized GCP storage and/or cloud data warehousing platforms
  • Build complex views that enable convenient and intuitive access to datasets
Quality & Optimization
  • Implement comprehensive data quality checks and monitoring to ensure accuracy and reliability
  • Optimize existing data workflows for improved performance, cost efficiency, and scalability
  • Ensure system quality, stability, and maintainability across all solutions
  • Build, maintain, and extend data models that support transactional workloads, real-time analytics, and reporting needs
Support and Maintenance
  • Work with a continuous improvement mindset
  • Provide Level 2 support and be part of on-call rotations
Collaboration & Communication
  • Communicate effectively with teams across many product domains
  • Participate in code reviews and contribute to best practices for data engineering
  • Create architectural diagrams and technical specifications for data processes and workflows
Compliance & Security
  • Ensure the security and privacy of data in accordance with financial-sector compliance requirements
  • Implement appropriate access controls and data governance practices
  • Support data operations across development, testing, and production environments
  • Perform regular audits and attestations for data protection and compliance
Leadership
  • Gather requirements from stakeholders and translate them into technical solutions
  • Advocate for architectural designs, explain tradeoffs, and recommend optimal paths forward
  • Serve as technical lead on a team of data engineers
  • Coordinate across Data Product teams to align tool usage and encourage best practices
Qualifications
  • Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field
  • 5+ years of experience in data engineering, cloud data engineering, or similar roles
  • Cloud Platforms: Expert-level knowledge of Google Cloud Platform (GCP); GCP Data Engineer certification strongly preferred
  • Programming: Expert proficiency in SQL and Python; familiarity with Java is a plus
  • Data Processing: Experience with ETL/ELT tools and frameworks (Apache Airflow, AWS Glue, dbt, GCP Dataflow)
  • Databases: Strong experience with relational database systems (PostgreSQL, SQL Server) and cloud data warehouses such as BigQuery and Snowflake
  • Infrastructure: Experience with CI/CD systems, Infrastructure as Code (Terraform), and Kubernetes
  • Data Replication: Experience with change data capture (CDC) tools like HVR, Datastream, Dataflow, Kinesis
  • Version Control: Proficiency with Git and modern CI/CD development practices
AI & Machine Learning Skills
  • Prompt Engineering: Demonstrated experience crafting effective prompts for various AI models and use cases
  • Large Language Models (LLMs): Hands-on experience with multiple LLMs (e.g., GPT-4, Claude, Gemini, open-source models)
  • AI-Assisted Development: Proven experience using agentic AI tools to co-author development work, including code generation, debugging, and optimization
  • AI Integration: Understanding of how to integrate AI capabilities into data processing workflows and automation
Additional Knowledge & Skills
  • Strong understanding of data warehousing concepts and dimensional modeling
  • Knowledge of distributed systems and data architecture patterns
  • Understanding of batch vs. streaming data processing
  • Experience with data modeling and schema design
  • Strong analytical and problem-solving skills
  • Experience in financial services strongly preferred
Benefits
  • Healthcare benefits (medical, dental, and vision; EAP)
  • Competitive PTO
  • 401(k) match
  • Parental leave
  • HSA contribution match
  • Paid subscription to the Calm app
  • Generous external learning and tuition reimbursement benefits
  • Hybrid work schedule for most roles, giving employees the flexibility to work from home and from one of our primary offices