
Using the Critical Incident Technique in UX and Market Research

Published: 5 days ago, by Alok Jain


 The Critical Incident Technique (CIT) is a qualitative research method for capturing and analyzing memorable events (or "critical incidents") from people's experiences[1][2]. In a CIT study, participants are asked to recall specific instances where something significantly impacted an outcome, either positively or negatively. By collecting these detailed stories, researchers can uncover what drives success or failure in a product or service experience[3]. Originally developed in the 1950s by John Flanagan for aviation psychology (to study pilots' critical errors), CIT has since been applied across many fields - from healthcare and education to market research, product development, and user experience (UX)[4].
 

Why use CIT? For UX designers, market researchers, and product managers, CIT is valuable because it focuses on what really matters to users or customers. Instead of general opinions, you gather concrete examples of when a system or service delighted or frustrated someone. This article explains what CIT is, when to use it, how to conduct a CIT study, the key deliverables and visualizations it can produce, and provides examples for both digital and physical product contexts. While we'll emphasize digital products (like apps or websites), remember that CIT works for physical products and services as well - anywhere users have critical experiences worth analyzing.
 
An illustration of the Critical Incident Technique, highlighting how researchers collect specific memorable events from users and analyze their impact on outcomes.


 What is the Critical Incident Technique? 



 The Critical Incident Technique is a systematic qualitative method used to gather detailed observations of significant events or behaviors from people who experienced them firsthand[1]. In practice, a researcher asks participants to recall and describe a time when a particular behavior or event affected a desired outcome, either in a good or bad way[2]. Each story a participant shares is a "critical incident." The incident could be positive (for example, a feature that greatly helped them complete a task) or negative (an issue that caused a failure or frustration). What makes it critical is that the person believes that event directly influenced the outcome of their activity[3].

For an incident to be considered critical, it should be a specific episode with a clear cause-and-effect relationship to the outcome[5]. In other words, it's not just any minor detail, but something that clearly helped or hindered the person in accomplishing their goal. By focusing on these key moments, CIT digs into the underlying factors that drive success or failure in using a product, system, or service.

Key points about CIT:

  • Retrospective and story-based: Participants look back and tell a story about what happened, rather than just answering yes/no questions. This yields rich, contextual data in the person's own words[6][7]
  • Captures extremes: It intentionally seeks out notable highs and lows in user experience - the major pain points or delight moments - rather than average day-to-day use[8][9]
  • Participant-driven insights: Because users choose which incidents to share, the data highlights what they consider important. It reveals the features or events users feel strongly about (either as very helpful or very problematic)[9]
  • Flexible method: CIT can be done through different formats - commonly one-on-one interviews, but also via questionnaires, focus groups, or even diary studies[10]. The key is asking open-ended questions that prompt specific stories of use. 

 CIT was originally introduced in an aviation context (to figure out why pilots succeeded or failed in missions) and later formalized in 1954 by Flanagan[11]. Since then, it's become popular in human-computer interaction (HCI) and UX research because it helps gather many detailed incidents about how people use interfaces and systems[12]. Importantly, CIT is not limited to digital products - it's used in healthcare (e.g. to identify events affecting quality of care) and customer service settings as well[13][14]. Whether it's a website, a mobile app, a physical device, or a service interaction, CIT can shed light on the critical requirements and pain points from the user's perspective.

When to Use the Critical Incident Technique

Like any research method, CIT has situations where it shines and scenarios where it may not be the best fit. Here are some benefits of using CIT and instances when it is especially useful, followed by limitations to consider (which hint at when not to rely on CIT alone):
 


Benefits of using CIT


  • Uncovers important issues quickly: Because participants focus on significant positive or negative experiences, CIT tends to surface the most impactful usability issues or features rather than minor details. Big problems (or big wins) are more memorable and thus more likely to be reported[15]. This means you can quickly learn about major system flaws or standout benefits. 

  • Captures rare or long-term events: CIT allows people to draw from potentially years of experience with a product, including infrequent but critical events[16]. This is an advantage over short-term observations or lab tests, which might miss rare issues that only happen occasionally[17]. With CIT, a user can recall that one time last year when the app crashed and deleted their cart - a crucial insight that a brief test might never catch. 

  • Includes both success and failure cases: The technique explicitly invites both positive and negative incidents, giving a balanced view of what helps users succeed and what causes them to struggle[9]. By examining what went right in addition to what went wrong, teams can learn not only what to fix, but also what to keep or emphasize in the design (the strongest performing features or practices). 

  • Rich, user-centered data: CIT collects detailed narratives in the users' own words, providing context around human behavior and decision-making[18]. These stories often reveal why something was helpful or problematic. This depth of insight can uncover underlying user needs, motivations, or emotions that quantitative data might miss. 

  • Flexible and cross-industry: CIT can be applied in many domains wherever people interact with systems or services. It has been used in market research, UX, healthcare, education, customer service, and more[19]. It's also flexible in execution - you can do CIT interviews in person or remotely, or even adapt it into survey form. This makes it a versatile tool in a researcher's toolkit. 

 Limitations and when not to use CIT 


  • Relies on human memory: Because CIT is retrospective, it depends on participants accurately remembering past events. Human memory is fallible - details get forgotten or distorted over time[20][21]. Users are more likely to recall recent incidents, or those with strong emotions attached, while less obvious issues might be overlooked. If your study needs extremely precise or comprehensive data about usage, this recall bias is a limitation. 

  • Not representative of typical use: By design, CIT draws out the extremes (critical incidents), not the routine. So it won't tell you about everyday, average interactions or frequency of typical behaviors[8]. If your goal is to understand general usage patterns or minor usability annoyances, a standard usability test or analytics might be better. CIT is best when you care about what really stands out (good or bad) to users, even if those things are infrequent. 

  • Time and resource intensive: A thorough CIT study can require a lot of time for conducting in-depth interviews and analyzing many stories. It often yields a large volume of qualitative data that needs to be carefully coded and interpreted, which can be heavy on time and budget[22]. Other methods (like quick surveys or A/B tests) might deliver insights faster if you have a very tight timeline or limited resources. 

  • Requires researcher skill: Moderating CIT interviews and analyzing narrative data takes experience. Interviewers must probe effectively without leading the witness, and analysts must reliably code complex qualitative input. Inexperienced facilitators could introduce biases (through leading questions or interpretation errors)[23]. Thus, CIT is best used by or with trained qualitative researchers to ensure credible results. 

  • May need a working product: CIT generally works when participants have actually used the product or service in question in a real context. It's less useful if you only have a concept or wireframe that no one has experienced yet. Ideally, you have at least a prototype or live product so users can draw on real interactions[24]. (However, participants could potentially recall incidents from a similar product or prior version if needed.) 

Use CIT when you want deep insights into the critical moments of a user experience, especially for mature products or services with an existing user base. It's particularly helpful in evaluative or discovery research after users have accumulated experiences over time. On the other hand, if you need to study routine usage or if users have no prior experience to draw on, CIT might not be the appropriate method. Often, teams will combine CIT with other methods - for example, follow up a CIT study with direct observation or usability testing to see how common the issues are in practice[25]. Used in the right situations, CIT can provide unique, actionable insights about what truly matters to users.
 

 How to Conduct a Critical Incident Study (Step-by-Step) 


 Conducting a CIT study involves careful planning and execution to gather useful stories and interpret them correctly. Below are the typical steps for using the Critical Incident Technique in a UX or market research project, from preparation through analysis and action[26][27]:
 
  1. Define the focus and objectives: First, clarify what you want to learn. Identify the product, process, or experience you will study and the type of outcome you're interested in. For example, you might focus on incidents affecting a user's ability to complete an online purchase. Having clear research objectives and criteria for what counts as a "critical" incident will guide the rest of the study. 
  2. Recruit the right participants: Select participants who match your target users or customers and have relevant experience with the product/service. They should have used the system enough to recall specific events. Aim for a sample that represents your user population (e.g. a mix of backgrounds or usage levels) so you'll gather a variety of incidents. Tip: If your product has user personas defined, recruit people fitting those personas[28]
  3. Choose data collection methods: Decide how you will gather the incident accounts. CIT is commonly done through one-on-one interviews (in-person or remote) where the interviewer can ask follow-up questions. However, you can also use questionnaires or surveys with open-ended questions, group discussions or focus groups, or even online diaries for participants to log incidents[10][29]. Interviews are most flexible and allow probing for details, while surveys can collect more responses with less depth. Pick the approach that fits your timeline and resources. 
  4. Conduct the CIT sessions: When interacting with participants, clearly explain the focus (e.g. "We want to learn about times when using the mobile app really helped you, and times when it caused problems."). Give people a moment to remember an incident, then ask them to describe in detail what happened, what led up to it, and what the outcome was[30][31]. A common strategy is to start with positive incidents first (to put participants at ease with a constructive tone), then ask for negative incidents[32]. For each incident, use follow-up questions to elicit context: What were you trying to do?, Why was this incident so helpful or frustrating?, What did you do next? It's important to keep participants focused on specific examples rather than general opinions[33]. Collect at least a couple of incidents per person (one positive, one negative, or more if they recall additional cases). Ensure you record the sessions or take detailed notes, capturing the stories in the users' own words as much as possible. 
  5. Analyze the incident data: After gathering all the incidents, the next step is to make sense of them. Go through the transcripts or responses and code each incident - this means categorizing the events or their causes into meaningful groups (themes)[34]. For example, you might notice several incidents relate to "payment process issues" or "helpful customer support." You can write each incident on a card or sticky note and perform an affinity diagram (card sort) to group similar ones together[35]. Look for patterns, themes, and frequency of different incident types[34]. Which problems were mentioned by many users? Which positive factors recur often? Also pay attention to unique but serious incidents. The goal is to identify the critical themes - the key pain points and success factors emerging from the data. You may also quantify the data at this stage (e.g. count how many participants reported each theme) to get a sense of prevalence[36]
  6. Summarize and report the findings: The outcome of analysis is then compiled into a report or presentation for your stakeholders (designers, product managers, etc.). In this report, clearly describe the critical incident themes you found, and include illustrative examples or quotes from participants for each theme. For instance, one theme might be "Checkout failures," with a short narrative of a user's incident where the cart froze at payment time. Emphasize both the negative findings (problems to fix) and positive findings (strengths to build on). It's often useful to provide some prioritization - for example, which issues were most frequent or most severe. Visualizing the results can help here (more on that in the next section). Finally, where appropriate, offer potential solutions or recommendations alongside the findings[37]. For example, if many users reported being confused by a certain feature, recommend a usability redesign for that feature. 
  7. Act on the insights: A CIT study's value comes when the team uses the insights to improve the product or service. The final step is to collaborate with your UX designers, developers, or business team to develop strategies and solutions based on the critical incidents uncovered[27]. This could mean brainstorming design changes to address top pain points, creating new features to replicate positive incidents, or adjusting training and support materials. Essentially, you feed the learnings back into the product development cycle. In a sense, CIT not only identifies problems but also points to requirements - critical needs that the product must meet for users[38]
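To make the analysis in step 5 concrete, here is a minimal Python sketch of tallying coded incidents once you have assigned each one a theme. The incident records, theme labels, and participant IDs are hypothetical, and in a real study the coding itself would be done manually (or with a tool), not in code:

```python
from collections import Counter

# Hypothetical incidents after manual coding; each record carries the
# participant who reported it and the theme it was assigned.
incidents = [
    {"participant": "P1", "theme": "payment process issues", "polarity": "negative"},
    {"participant": "P2", "theme": "helpful customer support", "polarity": "positive"},
    {"participant": "P3", "theme": "payment process issues", "polarity": "negative"},
    {"participant": "P1", "theme": "navigation confusion", "polarity": "negative"},
    {"participant": "P4", "theme": "helpful customer support", "polarity": "positive"},
]

# How many incidents fall into each theme
theme_counts = Counter(i["theme"] for i in incidents)

# How many distinct participants mentioned each theme (a better gauge of
# prevalence than raw incident counts, since one person can report several)
participants_per_theme = {
    theme: len({i["participant"] for i in incidents if i["theme"] == theme})
    for theme in theme_counts
}

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} incidents, {participants_per_theme[theme]} participants")
```

Counting distinct participants per theme is what lets you report statements like "8 out of 15 users had a payment-related critical incident" alongside the raw incident tallies.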

 By following these steps, you ensure that a CIT study is well-planned and yields actionable insights. Remember to pilot test your interview questions or survey beforehand to make sure participants understand them and that they indeed prompt the storytelling you need[39]. Also, maintain ethical practices: inform participants about confidentiality since they might share personal or sensitive anecdotes. When done right, a CIT study can be a powerful fact-finding mission that reveals where and why users have their most pivotal experiences with your product[40].
 


 Key Deliverables and Visualizations 


 After completing a Critical Incident Technique analysis, what deliverables can you expect to produce? Because CIT is qualitative, much of the output will be in the form of written insights and categorized data. However, there are also effective ways to visualize and share the results. Here are some key deliverables and how you might present them:

  • Catalog of Critical Incidents (by theme): A primary deliverable is a structured summary of the incidents collected, grouped into themes or categories. For example, you might present a list or table of the top 5–10 recurring themes that emerged (such as "Navigation confusion," "Checkout success," "Account setup issues," etc.), along with a description of each. Under each theme, include a few example incidents or verbatim quotes from users to bring it to life. This catalog essentially translates a heap of individual stories into generalized findings. You can also indicate how many users mentioned each category, to give a sense of its frequency or importance (e.g. "8 out of 15 users had a payment-related critical incident")[36]. It's even possible to make quantified statements like "53% of participants found Feature X helpful in a critical moment" based on your data[41]. These help stakeholders grasp the significance of each issue or success factor. 

  • Visual charts or graphs: To complement the written summary, researchers often create simple visuals. A common visualization is a bar chart or pie chart showing the distribution of incident categories. For instance, a bar chart might display the number of critical incidents falling into each theme (e.g. 10 incidents about navigation, 7 about performance, 5 about content, etc.). This provides a quick overview of which areas generated the most critical moments. You could also visualize the positive vs. negative split - for example, a pie chart of what proportion of incidents were positive experiences versus negative. Such charts make the data more accessible to stakeholders who prefer visuals, and they emphasize the relative weight of issues discovered[36]. (Keep in mind, as noted earlier, frequency isn't everything - a rarely mentioned incident can still be very important[42]. So use visuals as support, but explain the context of each issue as well.) 

  • User journey or timeline annotations: If the critical incidents relate to a process or journey (like an e-commerce purchase flow or a user onboarding sequence), it can be helpful to plot them along that user journey map. For example, you might take a typical customer journey and mark points where positive incidents occur (perhaps with a 👍 icon) and where negative incidents occur (👎 icon). This kind of visualization shows at which stage users are hitting critical snags or delightful moments. It highlights touchpoints that need attention. Even for physical services, a journey map (or service blueprint) annotated with critical incidents can guide improvements at specific steps. This deliverable is essentially a way to contextualize the incidents in the larger flow of user experience. 

  • Personas or segment-based insights: In some studies, you may find different types of users have different critical incidents. It could be useful to break out findings by user segment. For instance, new users versus power users might report distinct issues. In your deliverables, you might create a short persona-based summary: "For Persona A (novice user): critical incidents often involved setup and onboarding issues... For Persona B (experienced user): critical incidents were more about advanced features misbehaving..." This isn't always applicable, but if you see patterns by user type, it's worth highlighting. It helps stakeholders understand if certain problems affect a particular group of users. 

  • Narrative case studies or scenarios: Because CIT yields rich stories, another engaging deliverable is writing a few illustrative user stories as case studies. You can select one or two compelling critical incident examples and write them up as a narrative (a short paragraph each) describing the user, the situation, and what happened. For example: "A customer was trying to buy a gift on our app (Incident: Positive). She accidentally ordered the wrong size, but the easy exchange process turned it into a great experience. She described how the app's chat support immediately fixed the issue, making her 'trust the brand more.'" Such stories put a human face on the data and can be very memorable for the team. These narratives can also be used in design brainstorming or training, almost like mini case studies. In fact, CIT data can feed into creating use-case scenarios for design: designers can take real incidents and ask, "how might we ensure this success happens more often?" or "how can we prevent this failure?"[43][44]

  • List of prioritized issues and recommendations: Finally, a very practical deliverable (especially for product management) is a list of key issues to address, derived from the critical incidents. This might look like a classic action log or a table, where each critical issue is listed alongside its recommended solution or next step. For example: Issue: Users often get locked out of account (critical incident: password reset email not received). Recommendation: Implement SMS verification as backup, and improve email deliverability. The list can be ordered by priority (considering how many users it affected and how severe the impact was). This is essentially a problem backlog coming straight from user experiences[44]. It ensures the CIT study leads directly into improvements. Likewise, make a note of the positive findings - e.g. "Feature X is a fan favorite in critical moments - consider highlighting it in marketing or onboarding to more users."
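The distribution chart described above can be produced with any charting tool; as a dependency-free sketch, the snippet below renders the theme tallies as a simple text bar chart (the themes and counts are illustrative, not from a real study):

```python
# Hypothetical tallies of coded incidents per theme.
theme_counts = {
    "Navigation": 10,
    "Performance": 7,
    "Content": 5,
    "Checkout": 4,
}

def text_bar_chart(counts, width=30):
    """Render counts as a horizontal text bar chart, largest first."""
    longest = max(len(name) for name in counts)
    peak = max(counts.values())
    lines = []
    for name, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(n / peak * width)  # scale bars to the largest count
        lines.append(f"{name.ljust(longest)} | {bar} {n}")
    return "\n".join(lines)

print(text_bar_chart(theme_counts))
```

The same data structure feeds a bar or pie chart in a tool like a spreadsheet or a plotting library; the point is simply that the per-theme counts from analysis translate directly into the visual.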

 When preparing your report or presentation, use visual aids wherever appropriate. Besides charts and journey maps, you might include icons, highlight quotes in callout boxes, or even short video/audio clips of users describing an incident (if you have consent). The goal is to communicate the critical incidents clearly and compellingly. Because CIT data is story-driven, sharing even one or two user quotes or stories in the user's own voice can leave a strong impression on your audience.
 


 Examples of CIT in Practice: Peloton Bike Study Using DoReveal


A recent study about the use of the Peloton bike, conducted with the DoReveal platform, illustrates how CIT data can be structured and analyzed. 

 Participants described both positive and negative critical incidents around their purchase and use of Peloton. These incidents were systematically coded into triggers, actions, outcomes, and emotional impacts. Here are a few examples: 

  • Trigger/Antecedent: Building gym closed during COVID
    • Behavior/Action: Participant looked for alternatives
    • Outcome: Loss of gym access, exploration of home fitness
    • Emotion: Frustration, urgency
    • Theme: External disruption (Negative, high impact)

  • Trigger/Antecedent: Social media posts from friends
    • Behavior/Action: Participant inspired to try Peloton
    • Outcome: Purchase decision, increased motivation
    • Emotion: Encouraged, inspired
    • Theme: Social influence (Positive, high impact)

  • Trigger/Antecedent: Perceived high Peloton price
    • Behavior/Action: Rationalized cost by comparing to canceled travel
    • Outcome: Purchase despite initial resistance
    • Emotion: Anxiety shifting to relief
    • Theme: Financial decision (Mixed outcome, medium impact)

 By coding incidents in this structured way, the research team could visualize key themes (e.g., external disruptions, home environment constraints, social influence, household dynamics, financial decision-making). They also layered in polarity (positive/negative/mixed) and impact ratings to help prioritize which incidents were most influential.
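The coding scheme above maps naturally onto a small record type. As a sketch, here is one way to represent the coded Peloton incidents so that polarity and impact can be filtered for prioritization (the class and field names are illustrative, not part of the DoReveal platform):

```python
from dataclasses import dataclass

@dataclass
class CodedIncident:
    """One critical incident coded into the fields used in the study."""
    trigger: str   # antecedent event
    action: str    # participant's behavior
    outcome: str
    emotion: str
    theme: str
    polarity: str  # "positive" / "negative" / "mixed"
    impact: str    # "high" / "medium" / "low"

incidents = [
    CodedIncident("Building gym closed during COVID",
                  "Looked for alternatives",
                  "Loss of gym access, exploration of home fitness",
                  "Frustration, urgency",
                  "External disruption", "negative", "high"),
    CodedIncident("Social media posts from friends",
                  "Inspired to try Peloton",
                  "Purchase decision, increased motivation",
                  "Encouraged, inspired",
                  "Social influence", "positive", "high"),
    CodedIncident("Perceived high Peloton price",
                  "Rationalized cost by comparing to canceled travel",
                  "Purchase despite initial resistance",
                  "Anxiety shifting to relief",
                  "Financial decision", "mixed", "medium"),
]

# Layering in impact ratings lets the team surface the most
# influential incidents first.
high_impact = [i.theme for i in incidents if i.impact == "high"]
print(high_impact)
```

Once incidents are in this shape, the same filtering works for any slice the team cares about, such as all negative high-impact incidents or all incidents sharing a theme.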



Here are the detailed incidents identified for this study:


The Critical Incident Technique is a proven method for digging into the key moments of user or customer experiences. By systematically collecting stories of when things went great or went wrong, CIT helps researchers and teams understand the critical requirements for a product or service to succeed[45]. We've discussed how CIT works, when it's appropriate, and how to carry out a study. When used thoughtfully, CIT can reveal deep insights — it uncovers what truly matters to users over the long haul, including needs or problems that might not surface through regular observation or surveys.



However, CIT is not a one-size-fits-all solution. Always consider the research goal and possibly pair CIT with other methods for a fuller picture[46]. For example, you might use CIT to identify major pain points, then do a usability test to see those pain points in action, or use surveys to verify how common they are. Likewise, remember the limitations around memory and bias; structure your questions carefully so as not to lead participants, and interpret the findings with an understanding of their context.


In the end, CIT's strength lies in its rich, human-centered data. Especially for UX designers, market researchers, and product managers, these authentic user anecdotes are gold. They not only highlight issues but often suggest why the issues occur and how they affect real people. Armed with this knowledge, you can make more informed decisions to improve your digital or physical product. Whether it's refining a user interface or enhancing a customer service process, the insights from a Critical Incident Technique study can guide you to create better experiences that address the moments that matter most to your users.