Privacy-preserving LLM-based chatbots for hypertensive patient self-management
Sara Montagna; Stefano Ferretti; Lorenz Cuno Klopfenstein
2025
Abstract
Medical chatbots are becoming a basic component of telemedicine, propelled by advances in Large Language Models (LLMs). However, integrating LLMs into clinical settings raises several issues, with privacy concerns being particularly significant. This paper proposes a tailored architectural solution and an information workflow that address privacy issues while preserving the benefits of LLMs. We examine two solutions to prevent the disclosure of sensitive information: (i) a filtering mechanism that processes sensitive data locally but leverages a robust online LLM from OpenAI to engage with the user effectively, and (ii) a fully local deployment of open-source LLMs. The effectiveness of these solutions is assessed in the context of hypertension management across various tasks, ranging from intent recognition to reliable and empathetic conversation. Interestingly, while the first solution proved more robust in intent recognition, an evaluation of the models’ responses by domain experts, based on reliability and empathy principles, revealed that two out of six open LLMs received the highest scores. The study underscores the viability of incorporating LLMs into medical chatbots. In particular, our findings suggest that open LLMs can offer a privacy-preserving, yet promising, alternative to external LLM services, ensuring safer and more reliable telemedicine practices. Future efforts will focus on fine-tuning local models to enhance their performance across all tasks.
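As a rough illustration of the first approach described in the abstract, the sketch below shows how sensitive details in a patient message could be redacted locally before the text is forwarded to an external LLM service, and then restored in the model's reply. The patterns, placeholder scheme, and function names are assumptions made for illustration only; they are not taken from the paper's actual implementation.

```python
import re

# Hypothetical sketch of approach (i): sensitive data is filtered locally
# before the message is forwarded to an external LLM service. The patterns
# and placeholder scheme are illustrative assumptions, not the paper's code.
SENSITIVE_PATTERNS = {
    "NAME": re.compile(r"\bmy name is\s+([A-Z][a-z]+(?:\s[A-Z][a-z]+)*)", re.IGNORECASE),
    "BP_READING": re.compile(r"\b(\d{2,3}\s*/\s*\d{2,3})\s*mmHg\b", re.IGNORECASE),
}

def redact(message: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive spans with placeholders and remember the originals."""
    mapping: dict[str, str] = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        for i, match in enumerate(pattern.finditer(message)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match.group(1)
            message = message.replace(match.group(1), placeholder)
    return message, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original sensitive values into the model's reply."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

def answer(user_message: str, external_llm) -> str:
    """Filter locally, query the remote LLM, then restore sensitive values."""
    safe_message, mapping = redact(user_message)
    reply = external_llm(safe_message)  # e.g. a call to a hosted LLM API
    return restore(reply, mapping)

if __name__ == "__main__":
    # Stub for the external model, used only to demonstrate the round trip.
    echo_llm = lambda prompt: f"Noted: {prompt}"
    print(answer("My name is Mario Rossi and my reading is 150/95 mmHg.", echo_llm))
```

In approach (ii), the `external_llm` call would instead target a locally deployed open-source model, so no redaction round trip is needed.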
File | Type | License | Size | Format
---|---|---|---|---
1-s2.0-S2352648325000133-main-2.pdf (open access) | Publisher's version | Creative Commons | 2.09 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.