We ran the same research through Reveal and other popular tools. Then we asked ChatGPT to compare the outputs without telling it which tool produced which result. It was a blind taste test for qualitative research, and Reveal came out on top.
This research was conducted to understand how healthcare workers experienced and responded to the dynamic nature of the COVID-19 pandemic. The study focused on exploring topics such as evolving workplace protocols (including PPE usage), emotional and psychological impacts, communication issues, and the influence of misinformation and structural shortcomings. Data was gathered from a diverse group of healthcare professionals, offering insights into both frontline experiences and broader organizational responses.
You can access the transcript files here. We welcome you to conduct your own assessment.
Repetition Across Research Questions
In this tool, each column corresponds to a different research question. However, the AI-generated answers repeat many of the same phrases and themes regardless of the question asked.
Limited Depth & Inaccurate Attribution
This AI-generated output gives a shallow summary of emotional experiences, missing nuance and depth. It also incorrectly refers to "some participants" when only one person was interviewed, showing a lack of understanding of the data.
Moderator Prompt Mistaken as Participant Insight
The AI tool misidentifies the moderator's question as a participant insight, displaying the moderator's words as if they were part of the respondent's answer. This is a fundamental inaccuracy in the synthesis.
Capturing Relevant, Nuanced Details
Reveal understood each research question's intent, delivering distinct, emotionally rich, and context-specific insights. Instead of generic summaries, Reveal's output captured psychological nuance, narrative flow, and clear actions, all directly tied to participants' actual words.