Can the xFakeSci tool identify fake AI-generated content?

A new learning algorithm outperforms conventional data-mining methods at spotting AI-generated content such as ChatGPT-written articles, helping to safeguard scientific research from fabricated and plagiarized material.

Study: Detection of ChatGPT fake science with the xFakeSci learning algorithm. Image Credit: dauf / Shutterstock.com

The growing use of generative artificial intelligence (AI) tools such as ChatGPT has increased the risk of human-seeming content, plagiarized from other sources, entering the scientific literature. A new study published in Scientific Reports assesses how well xFakeSci differentiates authentic scientific content from ChatGPT-generated content.

Threats posed to research by generative AI

AI generates content in response to prompts or commands that direct its processing. Aided and abetted by social media, predatory journals have published fake scientific articles to lend authority to dubious viewpoints. The problem could be further exacerbated if AI-generated content were published in legitimate scientific journals.

Previous research has emphasized the challenges associated with distinguishing AI-generated content from authentic scientific content. Thus, there remains an urgent need to develop accurate detection algorithms.

Aim and overview of the study

In the current study, researchers utilized xFakeSci, a novel learning algorithm that can differentiate AI-generated content from authentic scientific content. This network-driven label-prediction algorithm operates in either a single mode or a multi mode, trained on one type of resource or on multiple types of resources, respectively.

During training, the researchers used engineered prompts to generate fake documents with ChatGPT and to identify their distinctive traits. Thereafter, xFakeSci was used to predict each document's class and, thus, its genuineness.
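
The study's exact prompt templates are not reproduced in this article; the snippet below is only a minimal sketch of how such engineered prompts could be issued programmatically, assuming the openai Python client and an illustrative prompt wording of our own.

    # Hedged sketch: generating candidate fake abstracts with an engineered prompt.
    # The model name, prompt wording, and topics are illustrative assumptions,
    # not the study's actual protocol.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_fake_abstract(topic: str) -> str:
        """Ask the model for a short, PubMed-style abstract on a disease topic."""
        prompt = (
            f"Write a 250-word scientific abstract about {topic}, structured like "
            "a PubMed abstract with background, methods, results, and conclusions."
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    fake_docs = [generate_fake_abstract(t) for t in ("cancer", "depression", "Alzheimer's disease")]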

Two types of network training models were built, one from ChatGPT-generated content and the other from human-written PubMed abstracts. Both data sets comprised articles on cancer, depression, and Alzheimer’s disease (AD).
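
For the human-written side of the training data, abstracts can be retrieved directly from PubMed. The following is a minimal sketch assuming Biopython's Entrez utilities and illustrative search terms; it is not the study's actual retrieval pipeline.

    # Hedged sketch: fetching human-written PubMed abstracts with Biopython.
    from Bio import Entrez

    Entrez.email = "you@example.org"  # NCBI requires a contact address

    def fetch_abstracts(term: str, n: int = 50) -> list[str]:
        """Return up to n PubMed abstracts matching a search term."""
        ids = Entrez.read(Entrez.esearch(db="pubmed", term=term, retmax=n))["IdList"]
        records = Entrez.read(Entrez.efetch(db="pubmed", id=ids, rettype="abstract", retmode="xml"))
        abstracts = []
        for article in records["PubmedArticle"]:
            sections = article["MedlineCitation"]["Article"].get("Abstract", {}).get("AbstractText", [])
            abstracts.append(" ".join(str(s) for s in sections))
        return abstracts

    real_docs = fetch_abstracts("alzheimer disease")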

Differences between two types of content

One of the striking differences between ChatGPT- and human-generated articles was the number of nodes and edges calculated from each type of content.

ChatGPT-generated content had significantly fewer nodes but a higher number of edges, resulting in a lower node-to-edge ratio. Moreover, in each of the k-folds, the AI-generated data sets showed higher ratios than the scientist-written content for all three diseases.
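
The study's exact network construction is not spelled out here, but the idea of comparing nodes and edges can be illustrated with a simple word-bigram graph. The sketch below, assuming the networkx library and a naive tokenizer, shows how a node-to-edge ratio could be computed for a single document.

    # Hedged sketch: build a word-bigram graph for one document and compute its
    # node-to-edge ratio. The tokenization and graph definition are assumptions.
    import re
    import networkx as nx

    def node_edge_ratio(text: str) -> float:
        words = re.findall(r"[a-z]+", text.lower())
        graph = nx.Graph()
        graph.add_nodes_from(words)                   # nodes: distinct words
        graph.add_edges_from(zip(words, words[1:]))   # edges: distinct adjacent word pairs
        return graph.number_of_nodes() / graph.number_of_edges()

    # Per the study, a lower ratio (fewer distinct words relative to distinct
    # word pairs) was characteristic of the ChatGPT-generated documents.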

Testing scores

After training and calibration, xFakeSci was tested on 100 articles for each disease, 50 each from PubMed and ChatGPT. F1 scores were calculated from the true positives, true negatives, false positives, and false negatives.
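
F1 itself depends only on the true-positive, false-positive, and false-negative counts; the short calculation below reproduces the depression result reported next, assuming the PubMed (real) articles are treated as the positive class.

    # F1 = 2TP / (2TP + FP + FN), the harmonic mean of precision and recall.
    def f1_score(tp: int, fp: int, fn: int) -> float:
        return 2 * tp / (2 * tp + fp + fn)

    # Depression test set: all 50 real articles detected (TP = 50, FN = 0) and
    # 25 fake articles misclassified as real (FP = 25).
    print(f1_score(tp=50, fp=25, fn=0))  # 0.8, i.e., the reported 80%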

F1 scores of 80%, 91%, and 89% were obtained for articles on depression, cancer, and AD, respectively. Whereas all human-written content was correctly identified by xFakeSci, only 25, 41, and 38 of the 50 ChatGPT-generated documents on these three diseases, respectively, were accurately flagged. ChatGPT-generated content was identified more accurately when it was mixed with older authentic articles and analyzed as a mixed class.

“ChatGPT is classified as PubMed with FP (false positives) = 25, indicating that 50% of the test documents are misclassified as real publications.”

Benchmarking xFakeSci

When benchmarked against ten widely accepted conventional data-mining algorithms, including Naïve Bayes, Support Vector Machine (SVM), Linear SVM, and Logistic Regression, xFakeSci’s scores remained between 80% and 91% for articles published between 2020 and 2024. In comparison, the other algorithms showed fluctuating performance, with scores ranging between 43% and 52%.
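
The study's exact baseline configurations are not given here; the sketch below shows how conventional classifiers of this kind are typically run on text, assuming scikit-learn with TF-IDF features and placeholder variables for the labelled PubMed and ChatGPT documents.

    # Hedged sketch: conventional data-mining baselines on TF-IDF features.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    baselines = {
        "Naive Bayes": MultinomialNB(),
        "Linear SVM": LinearSVC(),
        "Logistic Regression": LogisticRegression(max_iter=1000),
    }

    # train_texts/train_labels and test_texts/test_labels are placeholders for
    # labelled documents ("pubmed" vs. "chatgpt").
    for name, clf in baselines.items():
        model = make_pipeline(TfidfVectorizer(), clf)
        model.fit(train_texts, train_labels)
        preds = model.predict(test_texts)
        print(name, f1_score(test_labels, preds, pos_label="pubmed"))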

For earlier articles, published in 2014-2019 and 2010-2014, the same disparity was observed, with xFakeSci scoring 80-94% and the other algorithms 38-52%. Thus, xFakeSci outperformed the other algorithms across all time periods.

Conclusions

The xFakeSci algorithm is particularly well suited to multi-mode classification, in which a mixed test set is evaluated and accurate labels are produced for each type of content. The inclusion of a calibration step based on ratios and proximity distances improves the algorithm's classification performance; however, it precludes the addition of excessively large numbers of samples.
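
The paper's calibration procedure is only summarized above; one possible reading of "ratios and proximity distances" is sketched below, in which each class is reduced to a representative ratio during calibration and a new document is labelled by whichever class ratio it lies closest to. The numbers are purely illustrative.

    # Hedged sketch of proximity-based calibration; an interpretation of the
    # description above, not the study's published procedure.
    def calibrate(ratios_by_class: dict[str, list[float]]) -> dict[str, float]:
        """Summarize each class by a representative (here, mean) ratio."""
        return {label: sum(r) / len(r) for label, r in ratios_by_class.items()}

    def predict(doc_ratio: float, calibrated: dict[str, float]) -> str:
        """Label a document by proximity to the calibrated class ratios."""
        return min(calibrated, key=lambda label: abs(calibrated[label] - doc_ratio))

    calibrated = calibrate({"pubmed": [0.31, 0.29, 0.33], "chatgpt": [0.18, 0.21, 0.19]})
    print(predict(0.30, calibrated))  # -> "pubmed"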

The multi-mode classification capability of xFakeSci allowed the algorithm to accurately identify real articles even when they were mixed with ChatGPT-generated articles. However, xFakeSci was less successful at identifying all of the ChatGPT-generated content.

Networks generated from ChatGPT content had a lower node-to-edge ratio, indicating higher connectedness, which was accompanied by an increased ratio of bigrams to the total word count of each document.

Since ChatGPT was developed to produce human-like content by predicting the next word on the basis of statistical correlations, its objectives do not align with the scientific goals of documenting hypothesis testing, experimentation, and observation.

The xFakeSci algorithm may have other applications, such as distinguishing potentially fake parts of ChatGPT-generated clinical notes, interventions, and summaries of clinical experiments. Nevertheless, ethical guidelines must be enforced to prevent the irresponsible use of generative AI tools, even while recognizing their benefits.

AI can provide simulated data, build segments of code for multiple programming applications, and assist in teaching, while helping to present scientific research in readable grammatical English for non-native speakers. However, AI-generated content may plagiarize research documents available online, which could interfere with scientific progress and learning. Thus, journal publishers have an important role in implementing detection algorithms and other technologies to identify counterfeit reports.

Future research could use knowledge graphs to cluster closely linked fields of publication to improve the accuracy of detection, training, and calibration, as well as test the performance of xFakeSci using multiple data sources.

Journal reference:
  • Hamed, A. A., & Wu, X. (2024). Detection of ChatGPT fake science with the xFakeSci learning algorithm. Scientific Reports. doi:10.1038/s41598-024-66784-6.

Written by

Dr. Liji Thomas

Dr. Liji Thomas is an OB-GYN who graduated from the Government Medical College, University of Calicut, Kerala, in 2001. Liji practiced as a full-time consultant in obstetrics/gynecology in a private hospital for a few years following her graduation. She has counseled hundreds of patients facing issues ranging from pregnancy-related problems to infertility, and has been in charge of over 2,000 deliveries, striving always to achieve a normal delivery rather than an operative one.
