vintagedave a day ago

Curiously it focuses on overly descriptive phrasing, and factually incorrect statements, as signs of AI.

I don't think this is accurate. AI has a flavour or tone we all know, but it could have generated factually plausible statements (that you could not diagnose in this test) or plausible text.

I could not tell the real from fake music at all.

I support (and pay for) Kagi, but wasn't overly impressed here. At worst I think it might give people too much confidence. Wikipedia has a great guideline on spotting AI text and I think the game here should integrate and reflect its contents: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

  • benrutter a day ago

    > I could not tell the real from fake music at all.

    Not sure if it's in my head (haven't done a blind test or anything) but all AI music I've heard has painfully bad drum acoustics (very clicky). Seems like the most telltale marker, although I would love to make a "spot the AI song" game to prove myself right/wrong.

    • 1970-01-01 18 hours ago

      Yes, it's an uncanny valley when hearing sound details along with the digital artifact noise that gives it away. AI generated audio is lagging quite a bit behind AI video and images.

  • socalgal2 a day ago

    I found the music trivially distinguishable, or at least I thought it was. Maybe I just got lucky. The AI songs had lyrics that, to me, seemed like something that wouldn't be in most songs. I've heard far better AI-generated songs that I'd have a hard time distinguishing if I wasn't told.

  • A_D_E_P_T a day ago

    Right. Its examples fall into categories like:

    - AI slop is trivially factually wrong, and frequently overconfident.

    - AI slop is verbose.

    But, as you note, IRL this is not usually the case. It might have been true in the GPT-3.5 or early GPT-4 days, but things have moved on. GPT-5.1 Pro can be laconic and is rarely factually wrong.

    The best way to identify AI slop text is by its use of special and nonstandard characters. A human would usually write "Gd2O3" for gadolinium oxide, whereas an AI would default to "Gd₂O₃". ChatGPT also loves to use the non-breaking hyphen (U+2011), whereas all humans typically use the standard hyphen-minus character (U+002D). There's more along these lines. The issue is that the bots are too scrupulously correct in the characters they use.
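The character-based tell described above can be checked mechanically. A minimal sketch, using only the codepoints this comment happens to mention (the list is illustrative, not an exhaustive or authoritative detector):

```python
# Flag codepoints cited in the comment above as common AI "tells".
# This list is illustrative only; presence of these characters is
# weak evidence at best, since many editors auto-insert them too.
SUSPECT_CHARS = {
    "\u2011": "non-breaking hyphen (U+2011)",
    "\u2019": "curly apostrophe (U+2019)",
    "\u2082": "subscript two (U+2082)",
    "\u2083": "subscript three (U+2083)",
}

def suspect_codepoints(text: str) -> list[tuple[str, str]]:
    """Return (character, description) pairs for suspect characters in text."""
    return [(ch, SUSPECT_CHARS[ch]) for ch in text if ch in SUSPECT_CHARS]

# "Gd₂O₃" trips the subscript check; the plain-ASCII "Gd2O3" does not.
print(suspect_codepoints("Gd\u2082O\u2083"))
print(suspect_codepoints("Gd2O3"))
```

Note that word processors and smartphone keyboards also substitute curly apostrophes automatically, so a hit here is a hint, not proof.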

    As for music, it can be very tough to distinguish. Interestingly, there are some genres of music that are entirely beyond the ability of AI to replicate.

    • daotoad a day ago

      > whereas all humans typically use the standard hyphen-minus character (U+002D).

      I made it a point to learn to type the em dash—only to have it stolen by the bots; it's forced me to become reacquainted with my long lost friend, the semicolon.

      • A_D_E_P_T a day ago

        Hah, well, there's that.

        But I was referring to the special hyphen that the AIs frequently use today, and which is a hallmark of AI generated text, as it's not on regular keyboards and difficult to access: https://en.wikipedia.org/wiki/Wikipedia:Non-breaking_hyphen

        They're also fond of this apostrophe: ’

        Whereas almost every human uses: '

    • Der_Einzige a day ago

      I'm like 99.99999% sure that the usage of non standard characters, em dash, fancy quotes, "it's not X, it's Y" etc was clearly done on purpose from the very beginning and pushed for by various parties who have a strong vested interest in monitoring who is using AI and how it's being used.

      I kneel Hideo Kojima. You saw this all coming: https://youtu.be/PnnP4sA80D8

    • vanderZwan a day ago

      > But, as you note, IRL this is not usually the case.

      Except for the huge amounts of already generated slop that are combined with SEO to pop up in search results.

      • visarga a day ago

        Oh, if you finetune GPT-4 on an author it assumes the style so well that people prefer it to human experts doing the same job

        > "Readers Prefer Outputs of AI Trained on Copyrighted Books over Expert Human Writers"

    • CamperBob2 a day ago

      Interestingly, there are some genres of music that are entirely beyond the ability of AI to replicate.

      Sounds interesting, what are some of those genres?

      • A_D_E_P_T a day ago

        You'll find this surprising, but Suno is completely and utterly incapable of generating ambient music like this: https://www.youtube.com/watch?v=SsA-jxQsx8I

        It simply doesn't get it. This sort of thing probably wasn't in its training data.

        The really interesting thing is that when I upload something like that track, and tell it to compose something similar, it usually gives me an error and refunds my credits.

        Also, and this is far more mainstream, both Suno and ElevenLabs are totally incapable of generating anything like, e.g., Darkthrone's "Transylvanian Hunger." Music that is intentionally unpolished is anathema to them.

        I could go on. There are lots. I think that they understand melody and harmony, but they don't understand atmosphere, just in general...

  • ignoramous a day ago

    > I support (and pay for) Kagi, but wasn't overly impressed here

    This website strikes me as merely a marketing gimmick.

    • ants_everywhere a day ago

      Most likely they see AI as a competitor to search and are trying to survive by pandering to the anti-AI movement

  • beepbooptheory a day ago

    > I could not tell the real from fake music at all.

    Perhaps this is just a sign for you to listen to more (human) music is all!

  • alfon a day ago

    But then it wouldn't classify as slop?

hrimfaxi a day ago

Ironically, it seems the descriptions are AI-written?

(minor spoiler)

The text accompanying an image of a painting:

> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation. Meindert Hobbema. The Avenue at Middelharnis (1689, National Gallery, London)

  • phreack a day ago

    What bugs me the most about nearly everyone selling AI products is that they apparently want or need to believe in the power of LLMs for everything, not just the product. This means they also generate the explanatory texts and descriptions and readmes and... it makes the product itself feel of much worse quality.

    I don't mind that you're selling an AI product if it's good but at least put some humanity on the marketing side.

zipping1549 21 hours ago

I have been a paid Kagi user for over two years, and this is really, really bad. Just because something is verbose or (I just don't understand how they came up with this) wrong doesn't mean it came from an LLM.

Dwedit a day ago

Just because information is wrong doesn't mean it's AI generated, people can make up wrong answers too.

ottah a day ago

I will never understand why people are so obsessed with this. If you don't like it, don't engage with it. If you can't tell the difference and it's entertainment, stop worrying about it.

If veracity matters, use authoritative sources. Nothing has really changed about the skills needed for media literacy.

  • scubbo a day ago

    > If veracity matters, use authoritative sources

    So having a good heuristic for identifying a broad category of non-authoritative sources would be useful, then?

  • bondarchuk a day ago

    But you have to engage with it before you can find out whether you like it.

    >If you can't tell the difference and it's entertainment, stop worrying about it.

    At the end of the day it's a philosophical/existential choice. Not everyone would step into the awesome-life-simulator where you can't tell the difference. On similar grounds one might decide on principle to consume only human-made media, be a part of the dynamical system that is real human culture.

    • ottah 5 hours ago

      We have always been in the life simulator philosophically speaking. Everything is a construct, and the universe is mostly an existential horror. You're only chasing misery by trying to be the information vegan.

iterance a day ago

Correctness is a poor way to distinguish between human-authored and AI-generated content. Even if it's right, which I doubt (can humans not make wrong statements?), it doesn't do anything to help someone who doesn't know much about what they're searching.

pbaehr a day ago

I feel like this is a good educational goal but a very poor execution.

We're meant to assume correct sentences were written by humans and that AI adds glaring factual errors. I don't think it is possible at this point to tell a single human-written sentence from an AI-written sentence with no other context, and it's dangerous to pretend it is this easy.

Several of the AI images included obvious mistakes a human wouldn't have made, but some of them also just seemed like entirely plausible digital illustrations.

Oversimplifying generative AI identification risks overconfidence that makes you even easier to fool.

Loosely related anecdote: A few months ago I showed an illustration of an extinct (bizarre looking) fish to a group of children (ages 10-13ish). They immediately started yelling that it was AI. I'm glad they are learning that images can be fake, but I actually had to explain that "Yes, I know this is not a photo. This animal is long extinct and this is what we think it looked like so a person drew it. No one is trying to fool you."

  • AJ007 a day ago

    Kind of reminds me of the junk forensic fire science. "Slop Detective" might have been nice in 2022; now it's slop itself. Maybe this is an old link? If someone just published this in the last 90 days, they are an idiot.

    There's a lot of anti-AI sentiment in the art world (not news) but real artists are now actively accused of using AI and getting kicked off reddit or whatever. That tells me there is going to be 0 market for 100% human created art, not the other way around.

jedbrooke a day ago

I got tripped up by this one (sorry for spoilers):

> Bees collect pollen from flowers and make honey. They also drive tiny cars to get from flower to flower!

The explanation given is that it’s not factually correct, therefore it’s AI slop. Maybe I didn’t pay enough attention to the instructions, but aren’t humans also capable of creating text that is not factually correct, and at times doing so not out of ignorance but for artistic or humorous purposes? This example here sounds like something that would be written by a child with an active imagination, and not likely the kind of “seems plausible but is actually false” slop that LLMs come up with.

__jonas a day ago

Not sure where to submit a bug report but I chose the option for kids and got this as the 'correct' message for a painting:

> Correct! Well done, detective!

> This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation.

> Albert Pinkham Ryder, Seacoast in Moonlight (1890, the Phillips Collection, Washington)

The image is not photography; I guess technically it's a photograph of a painting, but still, confusing text.

neilv a day ago

> Here's why: This image shows authentic human photography with natural imperfections, consistent lighting, and realistic proportions that indicate genuine capture rather than artificial generation.

This sounds to me like the message is "poor fakes are generated, and everything else is genuine", which I think would be a very counterproductive message, even now.

yesfitz a day ago

I like the idea, but I think the game progression needs another pass from a designer.

I started on "Level 1" and got 2 things wrong (both false positives if it matters) and instead of feeling like I learned anything, I felt as though I was set up to fail because the image prompt was missing sufficient context or the text prompt was too simple to be human. Either I was dumb or the game was dumb.

Maybe I'm just too old and 8-11 year-old kids wouldn't be so easily discouraged, but I'd recommend:

1. Picking on one member of the "slop syndicate" at a time.

2. Show some examples (evidence) before beginning the evaluation.

Tiberium a day ago

The idea is great; the actual implementation is, frankly, horrible.

First of all, there are only 27 "slop" image examples, but 200 real ones - a very bad ratio. And almost all real examples are just dated photographs, paintings, photos of old books - there are genuinely 0 (not joking) modern photos or digital artwork. Also, multiple "slop" image examples were actual screenshots of the ChatGPT interface or clearly cropped screenshots.

Text is even worse - they somehow present it as if LLMs cannot write factually correct or simple text.

I genuinely believe that they should take this down immediately and do a major rework, because at this stage it will only do harm. It might teach the children or adults who complete this that AI can never write factually correct text or create very realistic-looking photos (good luck with Nano Banana Pro).

P.S. To see how bad it is, just scrape https://slopdetective.kagi.com/data/images/not_slop/{file} from image_001.webp to 200 and slop/image_001.webp to 027.

Also see https://slopdetective.kagi.com/data/text/slop/l3_lines.json and https://slopdetective.kagi.com/data/text/not_slop/l3_lines.j... for real vs LLM-written text.
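The scrape described above amounts to enumerating two numbered URL ranges. A minimal sketch of the URL generation only (download logic omitted; the pattern and counts of 200 real / 27 slop are taken from the comment, not verified against the live site):

```python
# Build the numbered .webp URLs the comment above describes.
# Counts (200 real, 27 slop) are the commenter's claim, not verified.
BASE = "https://slopdetective.kagi.com/data/images"

def image_urls(kind: str, count: int) -> list[str]:
    """Generate zero-padded URLs like .../{kind}/image_001.webp."""
    return [f"{BASE}/{kind}/image_{i:03d}.webp" for i in range(1, count + 1)]

urls = image_urls("not_slop", 200) + image_urls("slop", 27)
print(len(urls))  # 227 URLs in total
```

Each URL could then be fetched with any HTTP client; only the enumeration is shown here.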

  • plorg a day ago

    Clicking through 4 examples I found it hard to understand even what I was looking at. All 4 appeared to just be garbage that a human could get Canva to shit out in a couple of minutes, but the features that put them in the AI Slop bucket were things that identified the "slop", not the AI.

charcircuit a day ago

>What is "Slop":

>Fake stuff made by computers that tries to look like it was made by real people. It's everywhere online!

Tricking people is not what makes it slop. Being low quality is what makes it slop. This is a dangerous definition as it could mean that anything AI generated could be considered slop, even if it was higher quality than regular things.

  • archargelod 20 hours ago

    For me, AI slop is anything that was "vibed" with AI. If all it takes is writing a prompt and pressing a button - it's sloppy and lazy, and should not be the end product, period.

    But, you can take what AI generates, refine it, change it or use only parts of it, fact check it, etc. etc. Now, it's still AI generated, but not a "slop".

    AI can do 90% of your work, but the other 90% is still your job if you want someone else to care about it.

  • ImPleadThe5th a day ago

    Even if AI poetry/music/movies gets really high quality, it's still gonna be slop to me.

1970-01-01 19 hours ago

Sorry Kagi, but this heavily autotuned, non-AI music fits everyone's definition of slop. I demand you mark it as such.

moralestapia a day ago

>Water is wet. Wetness is what water has. What makes water water is that it's wet. The wetness of water means water is wet. So water has wetness.

>This was actually AI-generated slop! Repeats 'water is wet' multiple times.

I didn't know writing "water is wet" repeatedly was enough to de-humanize you.

>In many situations, it could be argued that grass may sometimes appear to have a greenish quality, though this might not always be the case.

>This was actually AI-generated slop! Won't commit to 'grass is green' and uses uncertain words.

What? Not all grass is green.

Fun times ahead.

beepbooptheory a day ago

The first music sample for 8th-12th graders I got, which was this charming instrumental chiptune-y piece, was so good (and real), but I can't find it again and forgot to try and Shazam it or something. Did anyone else get it/know the song?

chemotaxis a day ago

Yeah, I don't like this. By insisting that the main tell of AI writing is that it's comically inaccurate, it accidentally enforces a pretty bad association: that if something looks accurate, it's not AI slop.

I think you gotta start with a definition of what AI slop is and why it matters. Most of what LLMs generate is not obviously incorrect.

modzu a day ago

the whole site itself is slop... lol

hey kids, learn about ai slop by reading this guide to ai slop written by ai and full of ai slop mistakes. sheesh