Senior API & Data Services Engineer

Cetera · Dallas, TX
$111,000 – $148,000 · Hybrid

About The Position

We are at the forefront of transforming the future of technology in the financial industry, and we seek curious, practical individuals to help us pave the way. Our team is not intimidated by taking calculated risks; we relish a good challenge and are eager to solve hard problems. As a member of our team, you will work alongside like-minded experts in a culture deeply rooted in innovation and progress. Join us to be part of a transformative journey that can shape the industry's future.

The Senior API & Data Services Engineer will design, build, and operate high-performance, secure, and scalable data access services within Cetera's modernized event-driven data platform. This role focuses on developing API-first data services, caching layers, and access patterns that expose canonical operational data (transactions, positions, accounts, balances) to both operational and analytical consumers. The engineer plays a key role in decoupling data producers from consumers, enabling technology stack simplification while meeting the performance, security, and reliability expectations of a regulated financial services environment.

The role requires strong experience with AWS-native services, distributed systems, and low-latency caching technologies. The ideal candidate is a senior, hands-on engineer who embraces AI-assisted development practices to improve delivery velocity, code quality, and platform reliability, and who brings strong experience in API-first platforms, distributed systems, and financial services, where performance, security, and auditability are paramount.
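For a concrete, purely illustrative sense of what an API-first data service in this kind of platform can look like, the minimal sketch below exposes a canonical balance record through an API Gateway proxy integration and a Lambda handler. The `balances` table name and the `/accounts/{accountId}/balance` route are hypothetical assumptions, not details from this posting.

```python
# Illustrative sketch only -- not Cetera's production code. The "balances" table
# and the /accounts/{accountId}/balance route are hypothetical placeholders.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
balances_table = dynamodb.Table("balances")


def handler(event, context):
    """API Gateway (proxy integration) -> Lambda: GET /accounts/{accountId}/balance."""
    account_id = event["pathParameters"]["accountId"]

    # Read the canonical balance record for this account from the operational store.
    item = balances_table.get_item(Key={"account_id": account_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "account not found"})}

    # default=str converts DynamoDB Decimal values during JSON serialization.
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```

In practice, a service like this would sit behind the authentication, entitlement, caching, and observability layers described in the requirements and responsibilities below.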

Requirements

  • Bachelor’s degree in Computer Science, Data Engineering, or a related field; Master’s degree is preferred.
  • Minimum of 5–7 years of professional software or data engineering experience.
  • Demonstrated experience delivering production-grade APIs or data services.
  • Experience operating services with defined SLAs in regulated environments.
  • Hands-on expertise with AWS streaming technologies and cloud-based data engineering tools.
  • Proven track record of managing business-critical 24x7 operational data systems, including support and monitoring processes.
  • Financial services experience required, with preferred experience in broker-dealer and wealth management operations.
  • Strong experience building APIs (REST, GraphQL) and data services in distributed systems and API-first platforms.
  • Proficiency with AWS cloud services (API Gateway, Lambda, ECS/EKS, DynamoDB, Aurora, Iceberg).
  • Hands-on experience with Redis / in-memory caching (ElastiCache preferred).
  • Strong understanding of API security, authentication, and authorization patterns.
  • Experience with CI/CD pipelines and modern DevOps practices.
  • Strong understanding of data modeling and canonical data access patterns.
  • Experience with API observability, logging, tracing, and error handling.
  • Strong experience with AWS streaming technologies (e.g., Kinesis, Kafka on AWS (Amazon MSK)).
  • Expertise in API documentation, standards and best practices, and measurement (metrics, SLAs, etc.).
  • Hands-on experience using AI-assisted development practices to achieve standardization and efficiency for scalability.
  • Strong software engineering fundamentals (design patterns, testing, code quality).
  • Experience with observability and production support for critical services.
  • Ability to diagnose and resolve complex performance and reliability issues.
  • Familiarity with data contracts, schema versioning, and backward compatibility strategies.
  • Strong understanding of the unique requirements of financial services data, including compliance, regulatory considerations, and transaction processing.
  • Experience in broker-dealer and wealth management operations preferred.
  • Experience designing and managing support and monitoring frameworks for 24x7 operational data systems.
  • Proven ability to establish incident management processes and escalation protocols.
  • Strong background in optimizing operational data store (ODS) performance, availability, and reliability at scale.
  • Excellent problem-solving and analytical capabilities, especially in high-pressure environments.
  • Strong verbal and written communication skills, with the ability to convey technical concepts to business stakeholders effectively.
  • Collaborative mindset with a focus on shared platform success.

Nice To Haves

  • Strong understanding of and experience with REST, as well as GraphQL federation or schema management.
  • Experience supporting operational consumers via APIs.
  • Experience with event-driven & streaming architectures and their interaction with API layers.
  • Familiarity with data contracts and versioning strategies.
  • Exposure to supporting analytics and AI/ML consumers via governed APIs.
  • Exposure to data mesh or domain-oriented data access models.
  • Knowledge of emerging trends and technologies in real-time data processing and architecture.
  • Hands-on experience with data migration, modernization efforts, and cloud-native data solutions.
  • Financial services certifications preferred, such as the SIE, Series 99, or similar.
  • Technical certifications related to the AWS technology stack preferred (e.g., AWS Certified Solutions Architect (preferred), AWS Certified Data Analytics, OpenAPI/Swagger training, REST/API/GraphQL certifications).

Responsibilities

  • Design and implement API-first data services using REST and/or GraphQL patterns.
  • Build secure, performant access to canonical data produced by TPM and PCB platforms.
  • Implement versioned, contract-driven APIs to ensure backward compatibility and consumer stability (see the versioning sketch following this list).
  • Develop reusable data service components to reduce duplication across consumers.
  • Ensure APIs and data services are fully documented and cataloged for enterprise-wide consumption and self-service.
  • Design and implement Redis-based caching strategies to support low-latency data access (a cache-aside sketch follows this list).
  • Tune APIs and caching layers to meet performance SLAs for near-real-time use cases.
  • Balance cache consistency, freshness, and cost in distributed environments.
  • Diagnose and resolve performance bottlenecks across API, cache, and backend layers.
  • Enforce contract-driven access patterns to ensure consistent schemas, versioning, and backward compatibility.
  • Leverage AWS streaming technologies (e.g., Kinesis, Kafka on AWS) to enable real-time data processing, particularly for broker-dealer and wealth management transactions.
  • Develop and deploy services using AWS-native technologies, including API Gateway, Lambda, ECS/EKS, DynamoDB, Aurora, S3, and ElastiCache (Redis).
  • Build infrastructure-aware services that are resilient, scalable, and cost-efficient.
  • Participate in CI/CD pipelines, infrastructure-as-code, and environment automation.
  • Implement and manage role-based access control (RBAC) and entitlement-aware data access.
  • Ensure APIs and data services comply with security, privacy, and regulatory requirements.
  • Partner with Data Governance and Architecture teams to ensure consistent enforcement of enterprise standards.
  • Plan, prioritize, and manage multiple API engineering projects to ensure timely and within-budget delivery.
  • Collaborate with cross-functional teams to gather requirements specific to financial services, including regulatory and compliance needs.
  • Manage team resources effectively, balancing development and operational support needs.
  • Work closely with Data Architecture, Ingestion, Mastering, and Integrated Data teams.
  • Partner with application and analytics teams to onboard consumers efficiently.
  • Ensure alignment with enterprise security, governance, and data standards.
  • Translate consumer needs into scalable platform capabilities rather than one-off solutions.
  • Translate complex technical challenges into actionable insights for leadership and non-technical teams.
  • Leverage AI-assisted development tools (e.g., GitHub Copilot, code-generation and analysis tools) to accelerate feature delivery, improve code consistency and quality, and reduce repetitive development tasks.
  • Apply AI-supported testing, documentation, and refactoring techniques where appropriate.
  • Collaborate with peers to evolve best practices for responsible and effective AI usage in engineering workflows.
  • Ensure APIs and data services meet business-critical availability (24x7), performance, and reliability standards.
  • Implement observability through logging, metrics, and tracing.
  • Participate in incident response, root cause analysis, and continuous improvement efforts.
  • Contribute to runbooks, operational documentation, and support readiness.
  • Ensure SLAs, availability targets, and performance benchmarks are consistently met.
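As a rough illustration of the Redis caching responsibilities above, the sketch below applies a cache-aside read with a TTL-bounded staleness window. The ElastiCache hostname, key format, table name, and 30-second TTL are assumptions for illustration only.

```python
# Illustrative cache-aside sketch only; the host, key format, table, and TTL are hypothetical.
import json

import boto3
import redis
from boto3.dynamodb.conditions import Key

# A short TTL trades a small amount of freshness for a large reduction in backend reads;
# the right value depends on the SLA of each near-real-time use case.
CACHE_TTL_SECONDS = 30

cache = redis.Redis(host="example-elasticache.internal", port=6379, decode_responses=True)
positions_table = boto3.resource("dynamodb").Table("positions")


def get_positions(account_id: str) -> str:
    """Cache-aside read: try Redis first, fall back to the canonical store, then repopulate."""
    cache_key = f"positions:{account_id}"

    cached = cache.get(cache_key)
    if cached is not None:
        return cached  # cache hit: no backend read

    response = positions_table.query(KeyConditionExpression=Key("account_id").eq(account_id))
    body = json.dumps({"positions": response.get("Items", [])}, default=str)

    # SET with an expiry (ex=) bounds staleness; explicit invalidation driven by the
    # streaming layer can tighten consistency further when freshness requirements are stricter.
    cache.set(cache_key, body, ex=CACHE_TTL_SECONDS)
    return body
```

A shorter TTL or event-driven invalidation tightens freshness at the cost of more backend reads, which is the cache consistency, freshness, and cost balance called out above.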
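Similarly, as a hedged sketch of the versioned, contract-driven pattern referenced above: the field names and the v1/v2 split below are hypothetical, and the point is only that newer versions add optional fields rather than changing or removing what existing consumers depend on.

```python
# Illustrative versioning sketch only; fields and versions are hypothetical.
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class PositionV1:
    """Published v1 contract: existing consumers depend on exactly these fields."""
    account_id: str
    symbol: str
    quantity: float


@dataclass
class PositionV2(PositionV1):
    """v2 adds only optional fields, so it stays backward compatible with v1."""
    cost_basis: Optional[float] = None
    as_of: Optional[str] = None  # ISO-8601 timestamp


def serialize(position: PositionV2, api_version: str) -> dict:
    """Project the canonical record onto the schema version the consumer requested."""
    payload = asdict(position)
    if api_version == "v1":
        # v1 consumers never see v2-only keys, so their parsers stay stable.
        payload = {k: payload[k] for k in ("account_id", "symbol", "quantity")}
    return payload


if __name__ == "__main__":
    pos = PositionV2(account_id="A123", symbol="ABC", quantity=100.0, cost_basis=2500.0)
    print(serialize(pos, "v1"))  # only the v1 fields
    print(serialize(pos, "v2"))  # includes cost_basis and as_of
```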

Benefits

  • Competitive performance-based bonus