Generative Artificial Intelligence in (laboratory) medicine: friend or foe?
Lippi, Giuseppe
2023-01-01
Abstract
The artificial intelligence (AI) model ChatGPT, which is capable of generating human-like responses, has recently attracted enormous interest and attention owing to its inherent capability to write scientific text. The current consensus is that AI cannot be granted authorship of manuscripts because it cannot take responsibility for the content, though many scientists still believe that it could improve the ease and speed of writing. The currently available online generative AI tools do not easily allow one to accurately establish whether a given scientific text has been composed by human authors or by AI systems. To this end, some aspects could be analyzed, namely repetition (by means of plagiarism checks), style and tone, coherence and structure, and context and accuracy, though we suggest that these distinguishing elements may become more nuanced in the foreseeable future. In this article we have also tested the capacity of several different generative AI systems to answer a simple laboratory medicine query, concluding that the output does not exactly match a text written by a skilled scientist and that the algorithms still display imprecisions, suggesting the need for better training. Authoring scientific articles requires skills that can only be developed through years of training and experience, including sound knowledge of the subject and the ability to think creatively and make connections. At this point in time, generative AI systems could certainly assist scientific writing, but they cannot replace the knowledge, skill and creativity of a human writer.