
Generative Artificial Intelligence in (laboratory) medicine: friend or foe?

Lippi, Giuseppe
2023-01-01

Abstract

The artificial intelligence (AI) model ChatGPT, which is capable of generating human-like responses, has recently attracted considerable interest owing to its ability to write scientific text. The current consensus is that AI cannot be granted authorship of manuscripts because it cannot take responsibility for the work, though many scientists believe it could improve the ease and speed of writing. The generative AI tools currently available online do not make it easy to determine accurately whether a given scientific text was composed by humans or by AI systems. To this end, several aspects can be analyzed, namely repetition (by means of plagiarism checks), style and tone, coherence and structure, and context and accuracy, though we suggest that these distinguishing elements may become more nuanced in the foreseeable future. In this article we have also tested the capacity of several different generative AI systems to answer a simple laboratory medicine query, concluding that the output does not match a text written by a skilled scientist and that the algorithms still contain inaccuracies, suggesting the need for better training. Authoring scientific articles requires skills that can only be developed through years of training and experience, including sound knowledge of the subject and the ability to think creatively and make connections. At this point in time, generative AI systems can certainly assist scientific writing, but they cannot replace the knowledge, skill and creativity of a human writer.
Artificial intelligence; Laboratory Medicine

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11562/1106366
Citations
  • Scopus: 4