
A Fine-Tuning Pipeline with Small Conversational Data for Healthcare Chatbot

Viroli, Mirko; Montagna, Sara
2025

Abstract

Large language models (LLMs) have driven significant advances across natural language processing (NLP) tasks and have emerged as a core component in the design of conversational agents. In this paper, we focus on developing a chatbot that supports patients in managing their health conditions. In this context, while LLMs are well suited to chatbot development, relying on remote services raises concerns about privacy, reliability, and deployment costs. Smaller models offer a more practical alternative, but they often produce suboptimal results with in-context learning, especially when only limited conversational data are available. To address these challenges, we propose a pipeline for fine-tuning smaller models, enabling style transfer toward physician-like replies. A key component of this pipeline is a data augmentation module that leverages LLMs to generate synthetic data, expanding the typically small original dataset of patient question-physician answer pairs. We evaluate this approach on a hypertension-related conversational dataset, showing that the fine-tuned model outperforms the baseline in both automatic metrics and human evaluation.
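The abstract outlines a two-stage pipeline: an LLM-based augmentation step that expands a small set of patient question-physician answer pairs, followed by supervised fine-tuning of a smaller model on the augmented corpus. The paper's own implementation details (models, prompts, file formats) are not given here, so the following Python sketch is purely illustrative: the model name, prompt wording, and file paths are assumptions, and it only shows one plausible way to generate synthetic pairs and serialize them for a standard fine-tuning recipe.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract:
# (1) LLM-based augmentation of a small Q-A dataset, (2) preparation of the
# augmented corpus for supervised fine-tuning of a smaller model.
# Model name, prompt, and file paths are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

AUGMENT_PROMPT = (
    "You are a physician. Given the patient question and the reference answer, "
    "write {n} alternative question-answer pairs that preserve the medical "
    "content and the physician's register. Return a JSON list of objects with "
    "'question' and 'answer' fields.\n\nQuestion: {q}\nAnswer: {a}"
)

def augment_pair(question: str, answer: str, n: int = 3) -> list[dict]:
    """Ask a large LLM to produce n synthetic variants of one seed pair."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper does not name the LLM used
        messages=[{"role": "user",
                   "content": AUGMENT_PROMPT.format(n=n, q=question, a=answer)}],
        temperature=0.9,
    )
    # Assumes the model returns valid JSON; a real pipeline would validate this.
    return json.loads(response.choices[0].message.content)

def build_sft_corpus(seed_path: str, out_path: str) -> None:
    """Expand the seed dataset and write prompt/completion records for fine-tuning."""
    with open(seed_path) as f, open(out_path, "w") as out:
        for line in f:
            pair = json.loads(line)  # expects {"question": ..., "answer": ...}
            variants = [pair] + augment_pair(pair["question"], pair["answer"])
            for v in variants:
                record = {"prompt": f"Patient: {v['question']}\nPhysician:",
                          "completion": " " + v["answer"]}
                out.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    # Hypothetical file names; the augmented JSONL can then drive a standard
    # supervised fine-tuning run of a smaller open model.
    build_sft_corpus("hypertension_seed.jsonl", "hypertension_augmented.jsonl")
```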
ISBN: 978-3-031-95841-0
File in this record:
978-3-031-95841-0_1.pdf (Adobe PDF, 318.01 kB)
Type: Publisher's version
License: Copyright (all rights reserved)
Access: authorized users only

Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2757611
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0