The Trade Desk • posted about 1 month ago
Full-time • Mid Level
Bellevue, WA
1,001-5,000 employees
Computing Infrastructure Providers, Data Processing, Web Hosting, and Related Services

The Trade Desk is a global technology company with a mission to create a better, more open internet for everyone through principled, intelligent advertising. Handling over 1 trillion queries per day, our platform operates at an unprecedented scale. We have also built something even stronger and more valuable: an award-winning culture based on trust, ownership, empathy, and collaboration. We value the unique experiences and perspectives that each person brings to The Trade Desk, and we are committed to fostering inclusive spaces where everyone can bring their authentic selves to work every day.

Do you have a passion for solving hard problems at scale? Are you eager to join a dynamic, globally connected team where your contributions will make a meaningful difference in building a better media ecosystem? Come and see why Fortune magazine consistently ranks The Trade Desk among the best small- to medium-sized workplaces globally.

What we do

This specialized role sits within the Technology Operations group of The Trade Desk's Engineering organization, which is focused on delivering world-class solutions for enterprise needs across The Trade Desk. We are seeking a skilled and motivated Software Engineer II - Data Engineer to join our growing data team. In this mid-level role, you will be instrumental in designing, building, and maintaining the data pipelines and architecture that enable our organization to turn raw data into actionable insights. You will work on complex data problems, collaborate with cross-functional teams, and help ensure the reliability, efficiency, and quality of our data systems.

What you'll do

  • Data Pipeline Development: Design, build, and optimize scalable ETL/ELT pipelines for both batch and real-time data processing from disparate sources.
  • Infrastructure Management: Assist in the design and implementation of data storage solutions, including data warehouses and data lakes (e.g., Snowflake, S3, Spark), ensuring they are optimized for performance and cost efficiency.
  • Data Quality and Governance: Implement data quality checks, monitor data pipeline performance, and troubleshoot issues to ensure data accuracy, reliability, and security, adhering to compliance standards (e.g., GDPR, CCPA).
  • Collaboration: Work closely with product managers, data scientists, business intelligence analysts, and other software engineers to understand data requirements and deliver robust solutions.
  • Automation and Optimization: Automate data engineering workflows using orchestration tools (e.g., Apache Airflow, Dagster, Azure Data Factory; see the sketch after this list) and implement internal process improvements for greater scalability.
  • Mentorship: Participate in code reviews and provide guidance or mentorship to junior team members on best practices and technical skills.
  • Documentation: Produce comprehensive and usable documentation for datasets, data models, and pipelines to ensure transparency and knowledge sharing across teams.
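
For illustration only, not part of the posting: orchestration work of the kind described above might look like the following minimal Apache Airflow sketch in Python. The DAG id, task names, and extract/validate/load bodies are hypothetical stand-ins for real sources and warehouses, and the schedule argument assumes Airflow 2.4+.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Stub: a real pipeline would read from an upstream source here.
        return [{"id": 1, "clicks": 42}]

    def validate(ti):
        # A simple data-quality gate: fail the run before loading bad data.
        rows = ti.xcom_pull(task_ids="extract")
        if not all("id" in r and "clicks" in r for r in rows):
            raise ValueError("schema check failed")

    def load():
        # Stub: a real pipeline would write to a warehouse (e.g., Snowflake) here.
        pass

    with DAG(
        dag_id="example_daily_etl",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ):
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        validate_task = PythonOperator(task_id="validate", python_callable=validate)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> validate_task >> load_task

In practice the validation step would run richer checks (row counts, null rates, schema drift) so that bad data fails fast instead of propagating to downstream consumers.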

What you bring

  • Bachelor's degree in computer science, information security, or a related field, or equivalent work experience. Master's degree preferred.
  • 4+ years of experience in a data engineering role, with a broad understanding of data modeling, SQL, OLAP, and ETL, required. Experience building data pipelines, including data modeling at petabyte scale, is a bonus.
  • 4+ years of experience designing and implementing data and analytics solutions across multiple database platforms, using technologies such as Snowflake, Databricks, Vertica, SQL Server, and MySQL, required.
  • 4+ years of experience in one or more programming languages required, particularly SQL. Proficiency in PL/SQL, Python, C#, Scala, or Java also required.
  • Experience with workflow and data processing technologies like Spark, Airflow, Glue, Prefect, or Dagster required.
  • Experience with version control systems, specifically Git, required.
  • Familiarity with DevOps best practices and automation of processes like building, configuration, deployment, documentation, testing, and monitoring.
  • Understanding of BI and reporting platforms required, along with awareness of industry trends in the BI/reporting space and how they can apply to an organization's product strategies.
  • Strong analytical and problem-solving skills with attention to detail.
  • Excellent communication and collaboration skills to work effectively with diverse teams and stakeholders.
  • A proactive and continuous learning mindset.
  • Knowledge of the AdTech industry and its trends preferred.
  • Experience with containerization tools like Docker and Kubernetes preferred.
  • Familiarity with data modeling and data warehousing concepts preferred.
  • Basic understanding of machine learning concepts and how models work.

Benefits

  • Comprehensive healthcare (medical, dental, and vision) with premiums paid in full for employees and dependents
  • Retirement benefits such as a 401k plan and company match
  • Short- and long-term disability coverage
  • Basic life insurance
  • Well-being benefits
  • Reimbursement for certain tuition expenses
  • Parental leave
  • Sick time accrued at 1 hour per 30 hours worked
  • Vacation time for full-time employees of up to 120 hours through the first year and 160 hours thereafter
  • Around 13 paid holidays per year
  • Employees can also purchase The Trade Desk stock at a discount through The Trade Desk's Employee Stock Purchase Plan.