At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

American Express Travel Related Services Company, Inc. seeks Senior Data Engineers to work with product teams to understand business data requirements, identify data needs and data sources to create data architecture, and support changes and implementation. Responsibilities include:
- Documenting data requirements and data stories, and maintaining data models to ensure seamless integration into existing data architectures
- Creating and maintaining information about stored data, and translating database requirements into physical database designs
- Building and enhancing database design and infrastructure that supports the business portfolio
- Performing database design reviews, supporting database testing, and providing production environment support for database systems and processes
- Designing database features for ongoing sprints and monitoring database requirements based on industry trends, new technologies, known defects, and issues

The position requires a Master's degree in Computer Science, Engineering, Information Systems, or a related STEM field, and 2 years of big data engineering experience.
Experience must include 2 years of experience with each of the following:
- Designing and building large-scale distributed data applications using Apache Spark
- Developing databases and schemas on Cassandra, including data modeling for high-throughput and low-latency use cases
- Implementing distributed storage and retrieval using HDFS
- Orchestrating and scheduling workflows using Oozie
- Automating deployments and managing configurations using Ansible
- Tuning the performance of Spark jobs, Cassandra queries, and SQL workloads
- Conducting code reviews and managing source control using GitHub
- Applying Agile methodologies, including sprint planning, iterative development, and continuous delivery
- Using containerization and CI/CD tools, including Docker, Kubernetes, and Jenkins
- Applying data modeling and governance practices
- Developing and optimizing SQL
- Using Splunk for log management
- Applying the Spock Framework for unit testing
- Working with data streaming platforms, including Apache Kafka
- Monitoring Spark applications and infrastructure using Prometheus and Grafana
- Writing shell scripts for test automation

Telecommuting is available up to 2 days per week.

Job Location: Phoenix, AZ
Job Type: Full-time
Career Level: Mid Level