Effects of Different Prompts on the Quality of GPT-4 Responses to Dementia Care Questions

  • Zhuochun Li
  • Bo Xie
  • Robin Hilsabeck
  • Alyssa Aguirre
  • Ning Zou
  • Zhimeng Luo
  • Daqing He

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Evidence suggests that different prompts lead large language models (LLMs) to generate responses of varying quality. Yet little is known about prompts' effects on response quality in health care domains. In this exploratory study, we address this gap, focusing on a specific health care domain: dementia caregiving. We first developed an innovative prompt template with three components: (1) system prompts (SPs) featuring 4 different roles; (2) an initialization prompt; and (3) task prompts (TPs) specifying different levels of detail, totaling 12 prompt combinations. Next, we selected 3 social media posts containing complicated, real-world questions about dementia caregivers' challenges in 3 areas: memory loss and confusion, aggression, and driving. We then entered these posts into GPT-4, with our 12 prompts, to generate 12 responses per post, totaling 36 responses. We compared the word counts of the 36 responses to explore potential differences in response length. Two experienced dementia care clinicians on our team assessed response quality using a rating scale with 5 quality indicators: factual, interpretation, application, synthesis, and comprehensiveness (scoring range: 0-5; higher scores indicate higher quality). Both clinicians rated the responses from 3 to 5, with 75% agreement; consensus was reached through discussion. Overall, 44% of responses (16/36) were rated 5; another 44% (16/36), 4; and the remaining 4 (11%), 3. We found no interaction effect of system and task prompts and no main effect of system prompts on response length. Task prompts had a statistically significant effect on response length: F(2, 24) = 82.784, p < .001. Post hoc analysis showed that the significant difference was due to TP3, which led to significantly longer responses. There was no interaction or main effect of system and task prompts on response quality.
Our clinicians' qualitative feedback provided further insight: (1) system prompts with the different professional roles (neuropsychologist and social worker) did not lead to noticeable differences in response content (that is, there were no neuropsychology- or social work-specific versions of GPT-4 responses); and (2) TP3, while producing statistically longer responses, did not necessarily produce higher-quality responses clinically: at times the details in the lengthy responses seemed unnecessary from a clinical perspective. We discuss study limitations and future research directions.
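The 4 system-prompt roles crossed with the 3 task prompts yield the study's 12 prompt combinations per post. A minimal sketch of this enumeration is shown below; the role and task-prompt labels (beyond the neuropsychologist and social worker roles named in the abstract) and the `build_prompts` helper are hypothetical placeholders, not the paper's exact wording.

```python
from itertools import product

# Hypothetical labels: only "neuropsychologist" and "social worker" are named
# in the abstract; the other two roles and the TP wording are placeholders.
SYSTEM_ROLES = ["neuropsychologist", "social worker", "role 3", "role 4"]
TASK_PROMPTS = ["TP1: brief answer", "TP2: moderately detailed answer",
                "TP3: highly detailed answer"]
INIT_PROMPT = "You will answer a caregiver's question about dementia care."

def build_prompts(post: str) -> list[dict]:
    """Enumerate all system-role x task-prompt combinations for one post."""
    return [
        {"system": f"You are a {role}.",
         "user": f"{INIT_PROMPT}\n{tp}\nPost: {post}"}
        for role, tp in product(SYSTEM_ROLES, TASK_PROMPTS)
    ]

prompts = build_prompts("My mother keeps forgetting where she is...")
print(len(prompts))  # 12 combinations (4 SPs x 3 TPs), as in the study
```

With 3 posts, this enumeration produces the 36 responses analyzed in the study.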

Original language: English (US)
Title of host publication: Proceedings - 2024 IEEE 12th International Conference on Healthcare Informatics, ICHI 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 412-417
Number of pages: 6
ISBN (Electronic): 9798350383737
DOIs
State: Published - 2024
Event: 12th IEEE International Conference on Healthcare Informatics, ICHI 2024 - Orlando, United States
Duration: Jun 3 2024 – Jun 6 2024

Publication series

Name: Proceedings - 2024 IEEE 12th International Conference on Healthcare Informatics, ICHI 2024

Conference

Conference: 12th IEEE International Conference on Healthcare Informatics, ICHI 2024
Country/Territory: United States
City: Orlando
Period: 6/3/24 – 6/6/24

Keywords

  • dementia
  • informal caregiving
  • large language models
  • prompt engineering
  • social media

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Information Systems and Management
  • Statistics, Probability and Uncertainty
  • Health Informatics
