Abstract
Background: The advent of ChatGPT, an artificial intelligence (AI) model trained on extensive datasets to generate scientific text, has introduced new challenges in educational practice, particularly in scientific writing at higher education institutions. Many professors and academicians have expressed concern about incorporating AI chatbots into project execution, interpretation, and writing within specialized subject curricula at the undergraduate and master’s levels.
Methods: To address these concerns, we posed a specific query to ChatGPT: “Gynecomastia and the risk of non-specific lung disease, along with associated risk factors for workers in the petrochemical industry”. We then compared the responses generated by ChatGPT with real-time output from master’s students, examining document-to-document variation across different dates.
Results and Discussion: The AI chatbot failed to identify potential risk factors, in contrast to the student response, which highlighted alterations in neutrophil levels and lung architecture, high IgE, elevated CO2 levels, and other factors. The two responses did not align in context understanding, language nuances (words and phrases), or knowledge limitations (real-time access to information, creativity, and originality of the query). A plagiarism check using the iThenticate software reported similarity indices of 11% and 14% in the two document-to-document analyses. The concerns raised by academicians are not unfounded: apprehension about students’ future use of ChatGPT revolves around ethical considerations, the potential for plagiarism, and the absence of laws governing the use of AI in medical or scientific writing.
Conclusion: While AI integration in the curriculum is feasible, it should be approached with a clear acknowledgement of its limitations and benefits. Emphasizing critical thinking and original work is crucial for students engaging with AI tools, and it addresses concerns about ethics, plagiarism, and potential copyright infringement in medical and scientific writing.
Keywords: ChatGPT, artificial intelligence, gynecomastia, AI chatbot, neutrophil levels, plagiarism.