

I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But the fact that half the suggestions miss doesn’t make the tool useless. It isn’t doing the thinking for me - it’s giving me food for thought.
The problem isn’t weighing what an LLM says - it’s blindly taking it at its word.
You opened with a flat dismissal, followed by a Reddit quote that didn’t explain why horseshoe theory is wrong - it just mocked it. That’s not an argument; that’s posturing.
From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.
You then assumed what I must believe, argued against that imagined position instead of mine, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.