

UX Researcher Interview Questions & Answers

Preparing for a UX Researcher interview means getting ready to discuss your methodology, your passion for understanding users, and your ability to translate research into actionable insights. Whether this is your first UX research role or you’re advancing your career, this guide will help you navigate the most common interview questions you’ll encounter and craft responses that showcase your expertise.

UX Researcher interviews are unique—they blend behavioral storytelling with technical depth. Interviewers want to understand not just what you’ve researched, but how you think about problems, collaborate with teams, and drive impact. Let’s walk through the types of questions you’ll face and how to answer them with confidence.

Common UX Researcher Interview Questions

What is your approach to planning and executing a research study?

Why they ask: This question reveals your process, rigor, and ability to think strategically about research design. Interviewers want to know if you’re methodical and if you align research with business goals.

Sample answer: “I start by aligning with stakeholders on the research objectives—what are we trying to learn and why? Then I define the research questions and hypotheses. From there, I choose the methodology that best fits our timeline and resources. For example, in my last role, we needed to understand why users were abandoning a sign-up flow. I conducted five user interviews to uncover qualitative insights about pain points, then followed up with a survey of 200 users to validate those findings at scale. I analyzed the data, identified three major friction points, and synthesized them into actionable recommendations for the product team. Within two months, they implemented changes based on the research, and sign-up completion rates increased by 28%.”

Tip for personalizing: Walk through a specific project from your portfolio. Be concrete about the timeline, the number of participants, and the actual impact. If you don’t have a direct impact number, that’s okay—discuss what the team learned or decided based on your findings.


How do you choose between qualitative and quantitative research methods?

Why they ask: This tests your methodological thinking and your ability to match research tools to research questions. It shows whether you understand the strengths and limitations of different approaches.

Sample answer: “It depends on what stage we’re in and what questions we’re trying to answer. Early in product discovery, I lean toward qualitative methods like user interviews or contextual inquiry—I want to understand why users behave a certain way, not just if they do. I’ll typically do 5-8 interviews to uncover patterns and themes. As we move into validation or iteration, I shift toward quantitative methods. For instance, once we’d identified key pain points through interviews, I ran A/B tests and surveys with 300+ participants to see which solution resonated most broadly. I also combine them when I can. In one project, I used card sorting to understand how users mentally organize information, then followed up with tree testing on 500 users to validate the new information architecture. That combination gave us both the ‘why’ and the ‘at scale’ confidence.”

Tip for personalizing: Share an example where you used both methods in sequence. Explain what you learned from each and how they informed each other. This shows sophisticated thinking about research design.


Tell me about a time your research findings were not what stakeholders expected.

Why they ask: Interviewers want to see how you handle resistance, whether you stand by your research, and if you can communicate findings persuasively even when they’re inconvenient.

Sample answer: “We were designing a new feature that product leadership was convinced would be a huge hit. But when I conducted usability testing with our target users, the concept fell flat—users found it confusing and couldn’t articulate the value. It wasn’t what anyone wanted to hear. Rather than just deliver the bad news, I prepared a presentation that walked through the user feedback in detail, showed video clips of users struggling, and dug into why the feature missed the mark. Then I pivoted to suggest a simpler approach based on what users actually said they wanted. I also offered to run another round of testing on the revised concept. The team appreciated the honesty and the constructive next steps. We ended up launching a different version of the feature that was much more successful.”

Tip for personalizing: Focus on the emotional intelligence in your response. Show that you understand the stakeholder’s perspective, that you delivered bad news respectfully, and that you offered a path forward. This matters as much as being right.


How do you ensure your research is representative and unbiased?

Why they ask: This reveals your awareness of research ethics and your commitment to inclusive design. It shows whether you think critically about who you’re recruiting and potential blind spots.

Sample answer: “Bias can creep in at every stage, so I’m intentional about several things. First, in recruitment: I make sure I’m not just talking to users who are already power users or early adopters. I actively recruit for diversity across demographics, abilities, and tech comfort levels. On a recent project, I worked with a recruiting partner who had access to a panel of people with disabilities, which surfaced insights we would’ve completely missed otherwise. Second, I design unbiased materials—I test my survey questions and interview guides with colleagues to flag leading questions or assumptions. And third, I’m aware of my own biases during analysis. I’ll often code data independently and compare notes with a colleague to make sure I’m not cherry-picking quotes that confirm what I already believed. It’s not perfect, but it keeps me honest.”

Tip for personalizing: Give a concrete example of where bias could have happened and what you did to prevent it. Maybe you caught a leading question in a survey or realized your participant pool wasn’t diverse. Show the thought process.


How do you communicate research findings to a non-research audience?

Why they ask: This tests your ability to translate complexity into clarity and influence non-researchers. It’s as important as conducting good research.

Sample answer: “I meet people where they are. Engineers care about technical implications, designers care about design patterns, and executives care about business impact. So I tailor the story. For a checkout flow project, I might start with engineers by saying, ‘Here’s where the form validation is confusing users,’ and show specific interaction recordings. With designers, I’d focus on the mental model: ‘Users expect the payment and shipping address to be separate steps.’ And with the exec team, I’d lead with the business impact: ‘Users are abandoning at this step 23% of the time. If we fix it, we could recover $500K annually.’ I also always use visuals—video clips from user testing are so much more compelling than a spreadsheet of findings. And I keep decks short, usually one key insight per slide.”
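
A quick way to sanity-check a figure like that $500K estimate before quoting it to executives is to treat it as a simple funnel multiplication. The sketch below uses entirely hypothetical inputs (session volume, recovery rate, order value); swap in your own analytics numbers:

```python
# Back-of-envelope estimate of revenue recoverable by fixing a checkout
# step that 23% of users abandon. All inputs are hypothetical placeholders.

annual_sessions_reaching_step = 100_000  # users reaching the problem step per year
abandonment_rate = 0.23                  # share who drop off at that step
expected_fix_recovery = 0.30             # assumed fraction of abandoners a fix recovers
average_order_value = 72.50              # average revenue per completed checkout

recovered_orders = annual_sessions_reaching_step * abandonment_rate * expected_fix_recovery
recovered_revenue = recovered_orders * average_order_value

print(f"Recovered orders/year: {recovered_orders:,.0f}")     # ~6,900
print(f"Recovered revenue/year: ${recovered_revenue:,.0f}")  # ~$500,250
```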

Tip for personalizing: Mention a specific format you’ve used successfully—maybe it’s a highlight reel of video clips, an interactive dashboard, or even just a conversation over coffee. The medium matters.


What research tools and software are you proficient in?

Why they ask: They want to know if you can hit the ground running and whether you’re comfortable with their existing tool stack. They’re also testing your willingness to learn new tools.

Sample answer: “I’m comfortable with qualitative analysis tools like NVivo and Dovetail for coding and identifying themes. On the quantitative side, I’ve used Qualtrics for survey design, Google Analytics for behavioral data, and Mixpanel for product analytics. I’ve also done moderated and unmoderated usability testing in Maze, UserTesting, and Validately. That said, I’m tool-agnostic—the tool serves the research, not the other way around. I learn quickly. My last role switched to a new platform mid-project, and I was able to migrate our analysis and stay on timeline. What matters more is understanding the strengths and limitations of each tool and choosing the right one for the problem.”

Tip for personalizing: Be honest about what you know and what you don’t. It’s fine to say, ‘I haven’t used X, but I’ve used similar tools and I’m confident I’d pick it up quickly.’ Mention tools they use if you can infer them from the job description.


Describe a research project where you had to work with a very tight timeline or budget constraint.

Why they ask: They want to see your resourcefulness, prioritization skills, and how you maintain rigor under pressure.

Sample answer: “We had two weeks to validate a new concept before an investor pitch, and the budget was minimal. I couldn’t do a full usability study with 20 participants as I would have liked. Instead, I ran three short user interviews, then recruited 50 participants for a rapid online survey using a tool we already had. I focused the questions on the core value proposition and didn’t try to test everything—I had to pick the two or three things that mattered most. The compressed timeline actually forced clarity. We got directional insights fast: users loved the core concept but were confused by one specific feature. That finding went straight into the pitch and ended up being a key talking point with investors. It taught me that perfect research isn’t always necessary—sometimes directional research done quickly is exactly what stakeholders need.”

Tip for personalizing: Show that you can prioritize ruthlessly and make trade-offs consciously. The interviewer isn’t judging you for small sample sizes; they’re judging whether you’re smart about it.


How do you stay up to date with UX research trends and methods?

Why they ask: This reveals your curiosity, your commitment to professional growth, and whether you’re thinking about the broader field beyond your own work.

Sample answer: “I listen to a couple of podcasts—I really like the UserTesting podcast and the Reframer podcast from dscout. I’m also part of a local UX research meetup where we discuss case studies and new methodologies. Recently, the group introduced me to mixed methods research, which I ended up applying to a project. I read articles on Medium and follow researchers I respect on LinkedIn. And honestly, I learn a lot from my peers—I’ll often grab coffee with another researcher to talk through a tricky analysis problem or get feedback on a study design. I think the best learning happens through doing and through talking to others doing the same work.”

Tip for personalizing: Name specific resources you actually use, not just generic “I read industry blogs.” Mention one thing you learned recently and how it changed your approach.


Walk me through how you’d approach researching a product you’re not familiar with.

Why they ask: This is testing your curiosity, learning agility, and how you set up research projects from scratch. No “right answer” here—they want to see your thought process.

Sample answer: “First, I’d spend time using the product myself. I’d walk through the primary user flows and notice where I get stuck or confused—that gives me a gut sense. Then I’d talk to stakeholders: what are their questions? What’s the business context? Who are the users? Then I’d do a competitive analysis to understand how similar products approach the problem. From there, I’d synthesize what I know and what I’m curious about, and I’d draft a research plan. For a new product I knew nothing about, I might start with exploratory interviews with 5-8 target users just to understand their current workflows and pain points. Then I’d make a decision about what to study deeper based on what I’m hearing. If there’s a specific feature or flow that seems broken, I might do usability testing. If I’m trying to understand market fit, I might do concept testing. The process is: understand context, get curious, talk to users, and let that inform what I study next.”

Tip for personalizing: Show your framework without being rigid. Emphasize that you’d adapt based on what you learn along the way.


Tell me about a research project where you had to make a difficult methodological trade-off.

Why they ask: This tests your critical thinking and your ability to make pragmatic decisions when perfection isn’t possible.

Sample answer: “We were researching a feature that would mainly be used on mobile, but we had a small budget and tight timeline. The ‘perfect’ approach would have been to test on real devices in context, maybe doing in-home ethnography with mobile users. But that wasn’t realistic. So I made a trade-off: I ran unmoderated usability testing on a mobile simulator, which let us watch users navigate the interface and hear their thinking-out-loud feedback. It wasn’t as rich as in-context research, but it was fast and affordable. The trade-off I accepted was that we didn’t see how the feature fit into real, messy mobile workflows. To compensate, I added a short survey asking users about their actual use context and pain points. It wasn’t perfect, but it gave us 80% of what we needed in the time and budget we had. The product team was able to move forward with confidence.”

Tip for personalizing: Be realistic about constraints. Show that you understand what you’re gaining and losing with each trade-off, not that you pretend it doesn’t exist.


How do you handle disagreement between research findings and designer intuition?

Why they ask: This tests your collaborative approach and whether you’re dogmatic about research or flexible and strategic.

Sample answer: “I’ve been in this situation a few times. My instinct is never to dismiss the designer’s intuition—they often know things about the product and users that I might not. So I start with curiosity. I’ll ask, ‘Tell me more about why you think that’ and really listen. Then I’ll look at the data again and ask myself, ‘Could both be true? Maybe the research is pointing to one insight and the designer is seeing a different problem.’ Often there’s no real conflict; we’re just looking at different angles. If there genuinely is a conflict, I’ll propose a way to test both approaches. I might say, ‘Your approach is faster to implement. Let’s build it, launch it to a small cohort, and measure the impact. If it doesn’t hit our metrics, we try the approach the research suggests.’ This moves from debate to experimentation, which everyone appreciates.”

Tip for personalizing: Show respect for design thinking and intuition. You’re a partner, not an adversary. Highlight times you’ve actually learned from designers.


How do you prioritize what research to do when there are many competing stakeholder needs?

Why they ask: This tests your strategic thinking, your ability to say no diplomatically, and your business acumen.

Sample answer: “I align on criteria with leadership first: What will have the biggest impact on our goals? What’s the timeline? What’s the cost and effort? Then I look at dependencies—is there research that needs to happen before other research can be useful? I also think about team capacity and whether it makes sense to do everything or to prioritize and defer some things. In one role, we had requests for research on three different areas, but my gut said one of them would have way more impact on our metrics. I presented the criteria to the leadership team and showed how each request scored. The team agreed on the top priority, and I offered to do a quick competitive analysis on the other areas in the meantime. It helped people feel heard even though we couldn’t do everything.”

Tip for personalizing: Show that you think about impact, not just volume. You understand that doing fewer things well is better than doing many things poorly.


Tell me about a time you had to advocate for users when stakeholders disagreed.

Why they ask: Interviewers want to see if you’re a genuine user advocate, even when it’s uncomfortable. This tests your conviction and communication skills.

Sample answer: “We were building a feature that the exec team was really excited about, but my research showed that a large segment of our users—older adults who weren’t tech-savvy—would struggle with it. The feature was complex, had a steep learning curve, and would leave some users behind. I raised this in meetings and wasn’t initially taken seriously. So I prepared a more formal case: I showed video clips of older users getting frustrated, I quantified how many users would be affected, and I proposed a simpler alternative that would be more inclusive. I also offered to do some testing on the simplified version. It wasn’t that I was against the feature; I was for a version that wouldn’t exclude people. The team ultimately agreed to strip down the feature and add it back in phases, which made it more accessible. It took persistence and framing it as a business opportunity, not just a moral stance.”

Tip for personalizing: Show your conviction without being preachy. Focus on the user impact and the business case for inclusion. Persistence and evidence matter.


What would you do if you found a critical usability issue late in the product development cycle?

Why they ask: This tests your judgment, your ability to prioritize severity, and how you handle pressure and difficult conversations.

Sample answer: “First, I’d assess how critical it really is. ‘Critical’ means it stops users from completing a core task or significantly damages the user experience. I’d gather the data: How many users are affected? How severe is the impact? Is it affecting a core user journey or an edge case? Then I’d present it to the team clearly: ‘Here’s what we found, here’s the impact, here are options.’ One of those options might be launching with a workaround, another might be a delayed launch. But I wouldn’t bury it or downplay it just because it’s late in the cycle. I’d present the facts and let the team make an informed decision. In one case, we found a critical error message that was confusing users right before they’d click ‘purchase.’ We were literally a week from launch. We flagged it, and the team made a decision to extend the launch by a few days to fix it. Worth it.”

Tip for personalizing: Show maturity and professionalism. You’re not the bad guy for finding problems; you’re the person who prevents worse problems.

Behavioral Interview Questions for UX Researchers

Behavioral questions are designed to uncover how you actually behave under pressure, how you work with others, and how you’ve handled challenges in real situations. The STAR method (Situation, Task, Action, Result) is your friend here. Set the scene, explain what you needed to accomplish, walk through what you actually did, and finish with what happened.

Tell me about a research project where you discovered something unexpected.

Why they ask: This reveals your curiosity, your ability to adapt, and how you handle surprises.

STAR framework guidance:

  • Situation: Describe the project setup and what you expected to find.
  • Task: Explain what the research was intended to uncover.
  • Action: Walk through the moment you noticed something unexpected. Did you dig deeper? Did you change your approach? Show your adaptability.
  • Result: What did you learn? How did it change the direction? What was the impact?

Sample answer: “We were researching checkout flow optimization and expected the main issue to be around payment information entry. But as I watched users, I noticed they were getting stuck way earlier—at the shipping address step. They weren’t sure if the shipping address had to match their billing address. I hadn’t been looking for that, but I saw it across four of five users. So I pivoted. I added questions to my interview guide about address expectations, and in the survey, I specifically asked about this confusion point. It turned out 34% of users had this same question. That unexpected finding became the top recommendation, and when the team implemented a simple clarifying label, checkout conversion improved by 18%. If I’d just stuck to my original hypothesis, we’d have missed the real problem.”

Tip for personalizing: Show genuine curiosity and flexibility. Unexpected findings make for better stories than “everything went as planned.”


Describe a time you had to communicate complex research findings to a skeptical audience.

Why they ask: This tests your communication skills, your ability to handle pushback, and your confidence in your work.

STAR framework guidance:

  • Situation: What were you presenting? Who was the audience and why were they skeptical?
  • Task: What was your goal? Get them to believe the findings? Get them to take action?
  • Action: How did you structure your presentation? What specific techniques did you use? Did you use data, video, analogies?
  • Result: Did they come around? What changed?

Sample answer: “I presented findings on a product redesign to an engineering team that was skeptical about the value of UX research. They thought designers just did what they wanted anyway. So I led with a metric that mattered to them: we were losing users to competitors because the interface was confusing. Then I showed side-by-side video clips—here’s how a user currently struggles, here’s how they navigate the new design. I only showed the most compelling two or three clips, not all of them. I also prepared for pushback by doing the analysis myself alongside their usual metrics tools, so I could speak their language. By the end, one of the senior engineers said, ‘Oh, I see the problem now.’ They went from skeptical to actually interested in collaborating on the solution.”

Tip for personalizing: Remember that skepticism isn’t personal—it’s usually just unfamiliarity. Show how you met them where they were.


Tell me about a time you had to collaborate with a team member who had a very different working style than yours.

Why they ask: UX Researchers don’t work in a vacuum. This tests your flexibility, empathy, and ability to find common ground.

STAR framework guidance:

  • Situation: Who was this person? What was their working style? What was the project?
  • Task: What was the challenge or conflict?
  • Action: What did you do to bridge the gap? Did you adapt? Did you have a conversation?
  • Result: How did the relationship improve? What was the outcome of the project?

Sample answer: “I worked with a product manager who was very fast-moving and wanted quick, gut-check research. I’m more methodical and wanted to do things right. Early on, there was friction—I’d say, ‘We need a full research plan,’ and they’d say, ‘We need answers by Friday.’ Instead of fighting about it, I asked them why speed mattered so much. They said they were under pressure to show progress to investors. So I shifted my approach: for some questions, I did rapid testing. For others where we had more time, I did deeper work. We ended up with a rhythm that worked for both of us. And honestly, their urgency pushed me to be more efficient without sacrificing rigor. By the end of the project, they were actually asking me to do more research because they saw how it improved their decisions.”

Tip for personalizing: Show that you adapted without compromising your values. Highlight what you learned from the other person’s style.


Describe a research project that failed or didn’t go as planned.

Why they ask: Interviewers know things don’t always work perfectly. They want to see how you respond to failure and what you learned.

STAR framework guidance:

  • Situation: What was the project? What was supposed to happen?
  • Task: What went wrong? Was it a methodology issue, a recruitment problem, or something else?
  • Action: How did you respond? Did you fix it mid-project? Did you own the mistake?
  • Result: What did you learn? How did you do things differently next time?

Sample answer: “I ran a study where I recruited participants through a general panel, and I got really skewed results—they weren’t representative of our actual users. I was about halfway through and noticed the participants didn’t match our user personas in key ways. My first instinct was to just keep going, but I knew that would be a waste of time. I owned it with my stakeholders: ‘The recruitment approach didn’t work. I’m going to pause, recruit differently, and start over.’ We worked with a more targeted recruiting partner, and the second round of data was actually useful. It cost us two weeks and some money, but it was worth it. I learned to spend more time vetting recruiting partners and doing a test recruitment before starting the full study. It was a frustrating experience, but it made me better.”

Tip for personalizing: Don’t make excuses. Show that you take responsibility, learn, and change your approach next time.


Tell me about a time your research directly influenced a product decision.

Why they ask: They want to know that your work has impact and that you can tell a compelling story about it.

STAR framework guidance:

  • Situation: What was the research question?
  • Task: What was at stake? What decision were you informing?
  • Action: Walk through the research and how you presented findings.
  • Result: What decision was made? What was the business or user impact?

Sample answer: “We were deciding between two different approaches to onboarding. The product team had strong opinions, split about 50/50 on which was better. I ran an unmoderated study with 40 new users, asking them to go through onboarding and then answer questions about their experience. One approach had a 78% completion rate; the other had a 61% completion rate. More importantly, in the free-response feedback, users on the second approach said they felt overwhelmed. I presented the data, showed some video clips of users navigating each flow, and made a clear recommendation. The team went with the higher-performing option. Six months later, they told me that new user activation had improved by 23% compared to the previous year. They attributed it partly to the onboarding change we’d tested.”

Tip for personalizing: Connect the research to a concrete decision and a measurable outcome. If you don’t have perfect numbers, that’s okay—explain what the team decided and why your research mattered to that decision.


Describe a time you had to push back on a research request because it wasn’t feasible or wouldn’t answer the right question.

Why they ask: This shows good judgment, the ability to say no diplomatically, and your focus on doing research that actually matters.

STAR framework guidance:

  • Situation: What was the request? Who was it from?
  • Task: Why was it not feasible or not the right research question?
  • Action: How did you push back? Did you offer an alternative?
  • Result: How did it get resolved?

Sample answer: “A stakeholder asked me to survey ‘all our users’ about their satisfaction with a new feature—very broad, very vague. I asked clarifying questions and realized they actually needed to understand why some users liked the feature and others didn’t. A broad satisfaction survey wouldn’t tell them that. I suggested a different approach: I’d do a few in-depth interviews with high-satisfaction and low-satisfaction users to understand the differences, then design a targeted survey based on what I learned. That would give them the insights they needed and wouldn’t waste time surveying people about generic satisfaction. They were actually relieved—they just didn’t know how to ask for what they needed. We did the research my way, and they got much more actionable insights.”

Tip for personalizing: Show that you understand the underlying need, even when the request isn’t perfectly framed. You’re solving the real problem, not just doing what you’re asked.

Technical Interview Questions for UX Researchers

Technical questions dig into your methodology expertise and your ability to think through research design problems. For these, focus on your reasoning and your framework, not just the “right answer.”

How would you design a research study to understand why users are abandoning a mobile app?

Why they ask: This is a realistic scenario that tests your ability to design a study from scratch. They want to see your process.

Framework for answering:

  1. Start with clarifying questions: How long have they been seeing this? Is it abandonment after first use, or after several sessions? How many users are affected?
  2. Form a hypothesis: “Abandonment could be due to onboarding friction, confusing navigation, lack of value, or technical issues. I’d want to narrow down which.”
  3. Propose mixed methods: “I’d start with qualitative interviews with 5-8 recently churned users to understand their reasons. Then I’d design a survey for a larger cohort to validate which reasons are most common. I might also look at behavioral data—where do users drop off? How long are they in the app?”
  4. Consider logistics: Recruitment (where will you find churned users?), timing, sample size, and timeline.
  5. Explain what you’d measure: Completion rates, sentiment, self-reported reasons, and behavioral patterns.
  6. Close with next steps: “Based on what I find, we’d either dig deeper into specific issues or test solutions.”

Sample answer: “First, I’d clarify: are we talking about users who tried the app once and never came back, or users with multiple sessions? That changes my approach. Let’s say it’s first-time users. I’d start qualitatively—I’d recruit 6-8 users who downloaded the app within the last month but haven’t opened it in two weeks. I’d do phone interviews asking them to recall their first experience and what made them leave. I’d be listening for patterns: ‘The onboarding was confusing,’ ‘It didn’t work,’ ‘I didn’t understand the value,’ etc. From those interviews, I’d develop a survey to send to a larger group of churned users—maybe 200-300—asking them to rate how important each of those barriers was. I’d also look at Firebase or similar analytics to see where users actually drop off in the onboarding flow. Once I have all that data, I’d synthesize it into a recommendation: ‘The top reason people abandon is X, followed by Y. I recommend we start by testing a fix for X.’”
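
If the conversation goes deeper on the behavioral-data step, a minimal sketch of the drop-off analysis looks like this. The step names and counts are hypothetical stand-ins for whatever your analytics tool exports:

```python
# Compute step-to-step drop-off from onboarding funnel event counts.
# Step names and counts below are made up for illustration.

funnel = [
    ("app_open",         1000),
    ("signup_started",    820),
    ("signup_completed",  540),
    ("first_key_action",  310),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

The biggest percentage drop between steps tells you where to focus the qualitative follow-up.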

Tip: Walk through your thinking out loud. Show that you’d ask clarifying questions, combine methods, and let the findings guide the next steps.


How would you measure the success of a UX research initiative?

Why they ask: This tests whether you think about research impact and how you connect research to business outcomes.

Framework for answering:

  1. Distinguish between types of success: Direct impact (feature launched based on research, users completed tasks faster), indirect impact (team now does more user testing, designers ask better questions), and process improvements (research infrastructure, tools, documentation).
  2. Choose measurable KPIs: “Success might look like 80% of product decisions in Q3 were informed by research” or “Research recommendations led to a 15% improvement in the key metric we were optimizing for.”
  3. Acknowledge short vs. long-term: “Some impact shows up immediately—we tested a redesign and it hit our conversion goal. Other impact is slower—we’re shifting the culture toward user-centered thinking.”
  4. Be realistic: “Not every research project will have direct quantifiable impact. Some research kills ideas early, which is success—we’re not wasting engineering time on something users don’t want.”

Sample answer: “I’d measure success on a few levels. First, direct impact: Did the research inform a decision? Did it lead to changes that moved our key metrics? In my last role, I researched checkout barriers, we implemented changes based on the research, and conversion improved by 18%. That’s clear success. Second, adoption: How much is the team actually using research? Are they asking for it? Are they consulting findings when making decisions? I’d track that quarterly—how many research requests did we get, how many insights made it into product specs? Third, culture: Are we getting better at asking user-centered questions? Are teams more curious about user behavior? That’s softer, but it’s real. I’d measure it through retrospectives and feedback from cross-functional partners. Finally, efficiency: Can we run research faster and cheaper while maintaining rigor? If we’ve built good templates and found the right tools, research should become more sustainable.”

Tip: Show that you think about different types of impact, not just metrics. Demonstrate that you’re aware that not everything can be quantified.


Walk me through how you’d analyze data from a usability study with 12 participants.

Why they ask: This tests your data analysis process and your ability to synthesize qualitative information into actionable insights.

Framework for answering:

  1. Preparation: “I’d watch all 12 sessions, either live or recorded. I’d take notes on each one, flagging moments where users struggled, got confused, or expressed frustration.”
  2. Coding process: “I’d then do a pass through all the notes and code for themes. What issues came up repeatedly? I’d use a simple spreadsheet—rows are issues, columns are participants, and I mark which issues each person encountered. This gives me frequency data.”
  3. Pattern identification: “Once I’ve coded everything, I’d look for patterns. Issues that all 12 people hit are critical. Issues that 2-3 people hit might be edge cases. Issues where 7-8 people struggle are significant. I’d rank them by frequency and severity.”
  4. Video clips: “I’d pull 2-3 short video clips that show each top issue in action. These are way more compelling in a presentation than me just saying, ‘Users couldn’t find the button.’”
  5. Recommendation: “From the patterns, I’d synthesize specific, actionable recommendations. Not just ‘users are confused’ but ‘users are confused about whether the button is clickable. The solution: add visual affordances like a shadow or highlight when hovering.’”

Sample answer: “I’d start by going through all 12 sessions and taking notes on key moments—where did users struggle? I’d mark those in real-time or shortly after watching. Then I’d create a simple analysis doc: I’d list the top issues I noticed, and for each issue, I’d go through and mark which participants encountered it. This gives me a frequency count. With 12 people, if an issue shows up in 8-12 sessions, it’s a significant problem. If it’s 3-4 people, it might be an edge case. I’d also note severity—did it stop them from completing the task, or was it just a minor annoyance? Once I’ve done that, I’d write up a findings document that walks through the top 3-4 issues, includes video clips of the issue happening, and proposes specific fixes. The video is key—it’s so much more compelling than me describing the problem.”
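
The spreadsheet logic in that answer reduces to counting distinct participants per issue, then sorting by frequency and severity. Here is a minimal sketch; the issue names, severity scores, and observations are all hypothetical:

```python
from collections import defaultdict

# One (participant, issue) entry per observation pulled from session notes.
observations = [
    ("P1", "unclear CTA"), ("P2", "unclear CTA"), ("P3", "unclear CTA"),
    ("P1", "hidden nav"),  ("P4", "hidden nav"),
    ("P2", "slow load"),
]

severity = {"unclear CTA": 3, "hidden nav": 2, "slow load": 1}  # 3 = task-blocking

hit_by = defaultdict(set)
for participant, issue in observations:
    hit_by[issue].add(participant)  # sets avoid double-counting repeat hits

# Rank issues by how many of the 12 participants hit them, then by severity.
ranked = sorted(hit_by.items(), key=lambda kv: (len(kv[1]), severity[kv[0]]), reverse=True)
for issue, people in ranked:
    print(f"{issue}: {len(people)}/12 participants, severity {severity[issue]}")
```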

Tip: Show that your analysis is systematic and that you can synthesize patterns from a small sample. Emphasize video clips and specificity in recommendations.


What’s your approach to recruiting and screening research participants?

Why they ask: Recruiting is often where research goes wrong. They want to know if you’re thoughtful about getting representative, valid participants.

Framework for answering:

  1. Define your user: “First, I’d clearly define who the target user is. Age, tech comfort level, frequency of use, pain points—whatever’s relevant to the research question.”
  2. Identify recruitment channels: “Where do these users hang out? Are they in a panel? On social media? Do we have customer lists? Different channels have pros and cons—panels can be expensive but targeted; social media can be cheaper but noisier.”
  3. Screen questions: “I’d create a screener that qualifies people. For an app study, I might ask, ‘Do you use this app at least weekly?’ If the answer is no, they’re not the right person. I’d also look for diversity across demographic factors relevant to the product.”
  • Quality checks: “I’d ask questions to ensure participants understand what they’re signing up for and that they’re legitimate. Some recruiters include check-in questions like ‘What do you think this study is about?’”
  5. Incentive alignment: “I’d think about incentives. Too high and you attract professional respondents who aren’t real users. Too low and you get people who rush through. It should be fair but not so high that it skews who’s interested.”

Sample answer: “Recruitment can make or break research, so I’m careful about it. First, I define exactly who the target user is: the behaviors, demographics, and comfort levels that matter for the research question. Then I pick the channel that fits, whether that’s a customer list, a panel, or a targeted recruiting partner, each with its own trade-offs in cost and quality. I write a screener that qualifies people on actual behavior, like ‘Do you use this app at least weekly?’, and I recruit for diversity across the factors relevant to the product. I also build in a legitimacy check, and I set incentives that are fair without being so high that they attract professional respondents. The extra care up front is what keeps the data trustworthy later.”
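
To make the screener point concrete, here is a hypothetical sketch of qualification logic: gate on actual behavior, run a legitimacy check, and quota-balance one diversity factor. Field names and thresholds are invented for illustration:

```python
def qualifies(resp: dict) -> bool:
    """Screen on behavior and legitimacy, not self-described interest."""
    if resp["usage_frequency"] not in {"daily", "weekly"}:
        return False  # behavioral gate: must actually use the product
    if len(resp["study_purpose_guess"].strip()) < 15:
        return False  # legitimacy check: open-ended answer must be substantive
    return True

quota = {"low": 4, "medium": 4, "high": 4}  # tech-comfort quotas for 12 slots
accepted = []

candidates = [
    {"id": 1, "usage_frequency": "weekly", "tech_comfort": "low",
     "study_purpose_guess": "Probably about how I shop for groceries in the app."},
    {"id": 2, "usage_frequency": "rarely", "tech_comfort": "high",
     "study_purpose_guess": "No idea."},
]

for c in candidates:
    if qualifies(c) and quota[c["tech_comfort"]] > 0:
        accepted.append(c["id"])
        quota[c["tech_comfort"]] -= 1

print("Accepted participant ids:", accepted)  # -> [1]
```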
