Software Engineer, Ads Integrity

OpenAI, San Francisco, CA

About The Position

We’re building the Ads Integrity team to ensure OpenAI can grow advertising in a way that is safe, trusted, and sustainable for users, advertisers, and the business. This team sits at the intersection of rapid revenue growth and responsible platform stewardship, designing systems that let ads scale without compromising user trust or safety. It’s critical to us that our Ads product be built in line with our Ads principles, and this team is key to making that happen.

About the Role

As a Software Engineer on Ads Integrity, you’ll work on foundational integrity problems across advertiser identity, ad content, placement, and landing-page safety. You’ll help define how ads responsibly show up in products like ChatGPT, partnering closely with the Ads, Integrity, and Personalization teams to put the right constraints and protections in place as we move fast. This is a high-impact role on a 0 → 1 team with significant ownership and influence over how ads evolve at OpenAI.

Requirements

  • Have at least 6 years of professional software engineering experience.
  • Have experience setting up and maintaining production backend services and data pipelines.
  • Are comfortable operating as a generalist across distributed systems, data pipelines, and applied ML integrations.
  • Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.
  • Are self-directed and enjoy figuring out the best way to solve a particular problem.
  • Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done.
  • Care about AI safety in production environments and have the expertise to build software systems that defend against abuse.
  • Are motivated by working on ads—not just the growth, but the responsibility that comes with it.
  • Are experienced in trust, safety, integrity, fraud, abuse prevention, or other risk-sensitive domains (ads experience is a strong plus).
  • Are excited to work closely with product, policy, and other cross-functional partners to solve ambiguous, high-stakes problems.
  • Are able to take ownership, mentor others, and help set technical standards on a growing team.

Nice To Haves

  • Experience working with or training ML/LLM-powered systems for classification, moderation, or risk assessment.
  • Familiarity with advertiser ecosystems, ad marketplaces, or ad delivery pipelines.

Responsibilities

  • Design and build backend systems that ensure ads and advertisers meet safety, trust, and compliance standards.
  • Develop mechanisms to verify advertiser identity (“know your customer”) and assess advertiser risk.
  • Build and improve systems that evaluate ad content and landing pages for user safety and policy compliance.
  • Help determine where and how ads are shown, ensuring placements are appropriate, contextual, and aligned with user trust—especially in conversational AI surfaces like ChatGPT.
  • Partner cross-functionally with Core Ads, Integrity, and Personalization teams to balance growth goals with safety constraints.
  • Contribute to ML and LLM-adjacent systems that support automated decision-making, classification, and risk detection.
  • Shape technical direction, architecture, and best practices as the Ads Integrity organization scales.


What This Job Offers

Job Type: Full-time
Career Level: Mid Level
Education Level: None listed
Number of Employees: 5,001–10,000 employees
