New York Post · Posted about 1 month ago
$90,000 - $105,000/Yr
Full-time • Entry Level
Hybrid • New York, NY
501-1,000 employees

The New York Post provides readers with the best in News, Sports, Pop Culture and Entertainment, with signature wit, irreverence and authority, averaging 90 million unique viewers a month. Over the past 223 years, The Post has evolved into a multi-platform media company spanning print, digital, video, audio, app, television and commerce.

Today, the New York Post Digital Network is one of the most influential voices in the industry, reaching over 80 million unique users each month across our expansive, ever-growing portfolio. Anchored by NYPost.com, PageSix.com, and Decider.com, our digital brands deliver must-read breaking news, sports, entertainment, pop culture, and lifestyle coverage with the same wit, edge, and energy that define The Post. With innovation at our core, we've built a true multi-platform ecosystem spanning web, mobile apps, video, audio, social, print, TV, and commerce. From viral headlines and exclusive stories to original series, podcasts, and live events, we're engaging audiences where they are, always on the pulse of what's next.

We're looking for a Software Engineer to join our technology team and work on the systems that power The Post's digital future. This is a hands-on engineering role focused on infrastructure services, data pipelines, personalization systems, and custom APIs across AWS and GCP. You'll build and maintain the foundational services that enable our newsroom, data science, and product teams to deliver cutting-edge digital experiences at scale, working alongside senior engineers, product managers, and data teams on high-performance systems.

Responsibilities:

  • Design, build, and operate cloud-based APIs supporting the specialized needs of our consumer websites and native mobile apps.
  • Work daily with compute, query, storage, and big data services, using technologies like Vertex AI, Lambda, DynamoDB, Cloud Functions, S3, Kubernetes, Glue, and BigQuery.
  • Navigate a stack from JavaScript in the browser down through DNS, networking, applications, containers and operating systems (or serverless services), and into dependent service tiers.
  • Develop and maintain data ingestion pipelines for real-time and batch processing of large-scale content and event data.
  • Implement and scale custom APIs to support editorial workflows, personalization, analytics, and user-facing applications.
  • Write clean, testable, and maintainable code (Python or Node.js preferred).
  • Collaborate with product, editorial, and data science teams to deliver infrastructure and services that meet business needs.
  • Monitor, optimize, and improve the reliability, security, and cost efficiency of cloud-based systems.
  • Write and present ideas clearly. Document functional requirements for both technical and non-technical audiences.
  • Participate in code reviews, architecture discussions, and agile sprint ceremonies.
Qualifications:

  • MS or BS in Computer Science or a related field.
  • 2+ years of professional software engineering experience.
  • Proficiency in a modern programming language such as Python, Node.js, or Go.
  • Working knowledge of common AWS and/or GCP services.
  • Familiarity with REST and GraphQL APIs, authentication/authorization, and API best practices.
  • Firm grasp of the tools of the trade including IDEs, Git, JIRA, and agile methodologies.
  • Comfort with code reviews, writing unit tests, QA and UAT processes, and exposure to end-to-end testing frameworks.
  • Strong problem-solving skills, attention to detail, thoroughness, and an eagerness to learn.
  • Highly organized, with a proactive approach and the ability to manage multiple priorities in a fast-paced environment.
Nice to have:

  • Familiarity with content-management systems, media, news, publishing, video technology, ad technology, or other high-traffic consumer-facing industries.
  • Exposure to personalization and recommendation systems.
  • Comfortable with Linux administration, shell scripting, ssh, crontabs, etc.
  • Hands-on experience with generative-AI models, LLM prompting, and/or machine learning.
  • Experience with monitoring, logging, dashboard, and alerting tools (e.g., CloudWatch, Datadog, Splunk).
  • Experience building and deploying data pipelines (e.g., Airflow, Dataflow, Glue, or similar tools).
  • Knowledge of data warehouse technologies (BigQuery, Snowflake, Redshift, etc.).
  • Understanding of CI/CD workflows and containerization (Docker; Kubernetes a plus).