• Tikiporch@lemmy.world · 18 hours ago

    The problem is, if you ask an economist how they would implement sweeping, steep, across-the-board tariffs, the answer would be “Please don’t.” It’s such a stupid fucking idea, every answer is wrong.

    • ThrowawayPermanente@sh.itjust.works · 58 minutes ago

      They also asked ChatGPT about economists, and after conducting an exhaustive study on reddit and Twitter it reported back that everyone agrees economists are bad, mean, and wrong, and that you should just do what you think is best and everything will turn out fine, the same way it always has.

    • theneverfox@pawb.social · 12 hours ago

      They probably also wouldn’t set a specific tariff for an uninhabited island, even if they did it under protest.

    • Jhex@lemmy.world · 18 hours ago

      Perfectly illustrating how current “AI” may be an OK assistant to a trained professional for low-level, mundane tasks… It cannot come close to replacing the actual trained professional.

  • ArbitraryValue@sh.itjust.works · 22 hours ago

    I think that if the AI had been running the country, it wouldn’t have suggested crashing the American economy and potentially that of the rest of the world in the first place, but if you ask it stupid questions then you’ll get stupid answers.

    • Optional@lemmy.world · 21 hours ago

      You seem to think AI understands anything. It literally does not understand anything.

      • ArbitraryValue@sh.itjust.works · 21 hours ago

        It understands relationships between concepts, which is something that can be learned from reading text even without firsthand experience of the world. “Tariffs” is associated with “recession” and “recession” is associated with “bad”.

        • shalafi@lemmy.world · 12 hours ago

          “Tariffs” is associated with “recession” and “recession” is associated with “bad”.

          Nailed it. ChatGPT gave a pretty balanced definition, but at least it popped out “bad”.

          And if you put in Smoot-Hawley:

          Ah, the Smoot-Hawley Tariff Act — one of the most infamous tariff laws in U.S. history. It’s a textbook case of how tariffs can go very wrong.

          These people responding think you think AI is thinking. See, because they’re smarter than you! This place fucking annoys the hell out of me sometimes, just like old reddit. At least we’re not run over with bots and fascists.

        • Optional@lemmy.world · 21 hours ago

          Sort of. It understands “0.0023” is associated with “0.0037” and “0.0037” is associated with “0.15532”
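The “numbers associated with numbers” point can be sketched concretely: a language model’s next-token “choice” is just a softmax over raw scores, with no meaning attached to any of them. The logits below are made-up illustrative values, not taken from any real model.

```python
import math

def softmax(logits):
    """Turn raw scores (logits) into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for tokens that might follow "tariffs cause":
logits = {"recession": 2.1, "growth": 0.3, "rain": -1.5}
probs = dict(zip(logits, softmax(list(logits.values()))))
print(probs)  # "recession" gets by far the highest probability
```

To the model these are just floating-point numbers; whether the association between the numbers amounts to “understanding” is exactly what the thread is arguing about.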

          • ArbitraryValue@sh.itjust.works · 21 hours ago (edited)

            Yes, but I don’t see that as particularly significant in this context. Information, including the knowledge of economic theory stored in a human brain, can be represented digitally. The fact that a present-day AI presumably can’t actually experience what it’s like to be unhappy as prices rise and incomes fall doesn’t affect its ability to reason about economics.

            • Optional@lemmy.world · 20 hours ago

              We should probably just agree to disagree. I think the strides made in AI are at the very least impressive and have made some things (text-to-speech, for example) better - if not enormously then at least noticeably.

              But there isn’t a true analog to be had between calculated probabilities and conscious thought. The former is a mimic of varied competence, but has no logic inherent to it. It requires human maintenance; its only path to “growth,” if we want to call it that, is a black box of infinite probabilities it calculates at incredible speed.

              It’s a super-magic-8-ball that we choose to pretend has agency of some sort. But it does not.

    • alanjaow@lemmy.world · 22 hours ago

      It’s not the lighter’s fault if someone uses it to burn down a forest. Especially if the lighter is yelling the whole time that it’s a bad idea to burn down the forest!

      • kozy138@lemm.ee · 17 hours ago

        But it would be partly the lighter’s fault if it used up more power and water than most countries do.

        • BearGun@ttrpg.network · 2 hours ago

          No? A lighter is a tool; it has no agency and as such cannot carry blame. You can argue that the fault lies partly in the lap of the lighter’s creator, but not the lighter itself.

  • hark@lemmy.world · 19 hours ago

    Something that I’d read as a kid in a work of fiction and would think is cool is actually dogshit in practice. It’s no wonder I’m so pessimistic.