
A peer reviewed journal with nonsense AI images was just published

https://twitter.com/kareem_carr/status/1758148011245371627
20
20 comments
  • There is no peer review with these scam publications. You pay your flat fee and get published; that's it. This is how climate change deniers and all the other nut jobs get their studies published too. It has been going on for years. This is a cute joke that cost roughly $3K: https://www.frontiersin.org/journals/cell-and-developmental-biology/for-authors/publishing-fees

    26
  • This feels like clickbait to me, as the fundamental problem clearly isn't AI; at least to me it isn't. The title would have worked just as well without mentioning AI. The fact that the images are AI-generated isn't even that relevant. What is worrying is that the peer review process, at least at this journal, is clearly faulty, as no actual review of the material took place.

    If we do want to talk about AI: I am impressed how well the model managed to create text made up of actual letters resembling words. From what I have seen so far, that is often just as difficult for these models as hands are.

    25
    • Simplifying this down to an issue of just the review process flattens out the problem that generative AI does not produce content the way humans do. There are additional considerations to make when using generative AI: it does not have a sum of knowledge to draw from to keep certain ideas in check, such as how large an object should appear, and it doesn't have the ability to fact-check an object's relevance against the other objects in the image.

      We need to think about these issues in depth, because we are introducing a non-human, specific kind of bias into the literature. If we don't think about it systematically, we can't create a process that limits or reduces the amount of bias introduced by allowing this kind of content. Yes, the review process can and should already catch a lot of this, but I'm not convinced that waving our hands and saying that review is enough adequately addresses the biases we may be introducing.

      I think there's a much higher chance of introducing bias or false information in highly specialized fields, where few people have the knowledge needed to determine whether something generated by AI, which does not draw upon facts or fact-check itself, is actually correct. Reviewers are not perfect and may miss things. If we then draw upon this knowledge in the future to direct additional studies, we might create a house of cards that becomes very difficult to undo. We already have countless examples in science where a study with falsified data or poor methodology breeds a whole field of research that struggles to validate the original study, which eventually needs to be retracted. We could even have situations in which the study itself is valid, but an image influences how we think a process should work (or whether we can acquire funding to study it). Strong protections, such as requiring that AI-generated images be clearly labeled as such, can help mitigate these kinds of issues.

      10
      • I totally see why you are worried about the aspects AI introduces, especially regarding bias and the authenticity of generated content. My main gripe, though, is with the oversight (or lack thereof) in the peer review process. If a journal can't even spot AI-generated images, that raises red flags about the entire paper's credibility, regardless of the content's origin. It's not about AI per se; it is about ensuring the integrity of scholarly work. Because, realistically speaking, how much of the paper itself is actually good or valid? Even more interesting, and this would bring AI back into the picture: is the entire paper even written by a human, or is the entire thing fake? Or maybe that is also not interesting at all, as there are already tons of papers published with other fake data in them. People who don't give a shit about the academic process and just care about getting their names published somewhere have likely already employed other methods as well. I wouldn't be surprised if there is a paper out there with equally bogus images created by an actual human for pennies on Fiverr.

        The crux of the matter is the robustness of the review process, which should safeguard against any form of dubious content, AI-generated or otherwise. Which is what I also said in my initial reply: I am most certainly not waving my hands and saying that review is enough. I am saying that it is much more likely the review process has already failed miserably, and most likely has been failing for a while.

        Which, again to me, seems like the bigger issue.

        7
      • Rather the opposite: simplifying this down to an issue of just an AI introducing some BS flattens out the problem that grifter journals don't follow a proper peer review process.

        > introducing bias or false information in highly specialized fields

        > Reviewers are not perfect, and may miss things

        It's called a "peer review" process for a reason. If there are not enough peers in a highly specialized field to conduct a proper review, then the article should stay on arXiv or some other preprint server until enough peers can be found.

        Journals that charge for "reviewing" BS, no matter if AI generated, or by a donkey with a brush tied to its tail, should be named and shamed.

        > We already have countless examples of this in science where a study with falsified data or poor methodology breeds a whole field of research which struggles to validate the original studies and eventually needs to be retracted.

        ...and no AI was needed. Goes to show how AI is the red herring here.

        7
    • Modern AI image generators are pretty good at creating text (and hands). You're right that that's very recent, though (like the last six months); they used to be bad at it.

      American classic car,1935 Ford Pickup, poster ,retro ,illustrator, vibrant colors,Idaho color view landscape, with words "Idaho"

      7
      • Oh huh, you are right. I threw that exact prompt into DALL-E and indeed got legible letters.

        6
  • I think the names of those who review papers should be released, so you know who to blame when shit like this goes public.

    8