🇨🇦🇩🇪🇨🇳张殿李🇨🇳🇩🇪🇨🇦

My Dearest Sinophobes:

Your knee-jerk downvoting of anything that features any hint of Chinese content doesn’t hurt my feelings. It just makes me point and laugh, Nelson Muntz-style, as you demonstrate time and again just how weak American snowflake culture really is.

Hugs & Kisses, 张殿李

  • 6 Posts
  • 90 Comments
Joined 1 year ago
Cake day: November 14th, 2023

  • I did something with Perplexity as a test. I asked it a complicated question (which it botched, because despite being “search-driven” it searches like a grandma using Google for the first time, and I mean the current slop-based Google). After giving it more information to finally get it onto the right topic, I started asking questions designed to elicit a conclusion. Which it gave. And it shows you the little box saying what steps it’s supposedly following while it works.

    Then I asked it to describe the processes it used to reach its conclusion.

    Guess which of these occurred:

    1. The box describing the steps it was following matched the description of the process at the end.
    2. The two items were so badly mismatched it was like two different AIs were describing a process they’d heard about over a broken phone line.

    Edited to add:

    I was out of the number of “advanced searches” I’m allowed on the free tier, so I did this manually.

    Here is a conversation illustrating what I’m talking about.

    Note that I asked it twice directly, and once indirectly, to explain its thinking processes. Also note that:

    • Each time it gave different explanations (radically different!).
    • Each time it came up with similar, but not the same, conclusions.
    • When I called it out at the end it once again described the “process” it used … but as you can likely guess from the differences in previous descriptions it’s making even that part up!

    “Reasoning” AI is absolutely lying and absolutely hallucinating even its own processes. It can’t be trusted any more than autocorrect. It cannot understand anything, which means it cannot reason.


  • Seems like none of these refer to information lookup services but to generative ai

    I use Google search, gemini, and local llms of various models.

    THIS is why I’m so fucking pissed at people like you.

    You can’t even keep your stories straight from one post to another. You hallucinate more than all the LLMs of the world put together.

    So, I put up my receipts for the claim that this shit is making you weaker. As far as I’m concerned, especially with your little instant-contradiction thing here, that pretty much establishes that you’re full of shit.

    …and assuming I am an ai bro…

    Oh, my. Someone coming to “Fuck AI” and defending AI for absurd amounts of time, who ignores evidence provided to counter his claims about the benefits of AI … why would anybody identify this as an aibrodude?

    It’s a mystery.

    But it’s not one that’s going to be solved with any more of my fucking time.


  • I mean, there is an art form where you take cut-up elements of art (photos of eyes, face portions, etc.) and carefully paste them together into a piece of art. And that’s still art. Because an artist actually thought about these elements, took them, pasted them, all for specific effect. (And like all arts, there are good exemplars and bad ones.) Someone doing the same thing with random pieces kinda/sorta thrown together isn’t making art, however.

    AI picture generators are like the latter one: just taking random pieces of shit, and sticking them together. It is not making them for specific effect because it doesn’t know what specific effect even MEANS. It pairs certain collections of picture elements with certain words, randomly throws them together without any regard for the whole, and slops that onto the screen.

    Which is why you get bizarre incoherences like belts not continuing when a strand of hair crosses over top, say, or buttons that button nothing in random locations, or the infamous chthonic fingers of doom. There’s no comprehension.

  • But I guess it is “ethically” sourced as they kinda asked by making it opt out, I guess.

    No.

    As your mother’s case shows, making it “opt out” is emphatically not the ethical choice. It is the grifter’s choice because it comes invariably paired with difficult-to-find settings and explanations that sound like they come from a law book as dictated by someone simultaneously drunk and tripping balls.

    The only ethical option is “opt in”. This means people give informed consent (or, if they don’t bother to read and just click OK, at least they brought it on themselves). This means you have to persuade people that the choice is good for them and not just for the service provider.

    TL;DR: Opt-in is the way you do things without icky “I don’t understand consent” vibes.