I think he was just imminently concerned about their safety. Like the post suggests, many thought desperate times were coming and any rando in a maga hat might retaliate.
I think he was just imminently concerned about their safety. Like the post suggests, many thought desperate times were coming and any rando in a maga hat might retaliate.
I hear eugenists say “there’s not enough people” in place of “we need more specially white folk so the uneducated colored don’t replace us” far far more often, see Elon musk and the like.
I’ve actually only ever heard “there’s too many people” come from anti capitalists.
At least the same company developed both in that case. As soon as a new open source AI model released, Elon just slapped it on wholesale and started charging for it
Collective mass arbitration is my favorite counter to this tactic, and is dramatically more costly for the company than a class action lawsuit.
https://www.nytimes.com/2020/04/06/business/arbitration-overload.html
A lot of companies got spooked a few years back and walked back their arbitration agreements. I wonder what changed for companies to decide it’s worth it again. Maybe the lack of discovery in the arbitration process even with higher costs?
The responses aren’t exactly deterministic, there are certain attacks that work 70% of the time and you just keep trying.
I got past all the levels released at the time including 8 when I was doing it a while back.
deleted by creator
Excuse me but, the fuck is wrong with you?
The reason that makes the most sense in one of the articles I’ve read is that they fired him after he tried to push out one of the board members.
Replacing that board member with an ally would have cemented control over the board for a time. They might not have felt his was being honest in his motives for the ousting, so it was basically fire now, or lose the option to fire him in the future.
Edit: https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html
I’ve definitely experienced this.
I used ChatGPT to write cover letters based on my resume before, and other tasks.
I used to give it data and tell chatGPT to “do X with this data”. It worked great.
In a separate chat, I told it to “do Y with this data”, and it also knocked it out of the park.
Weeks later, excited about the tech, I repeat the process. I tell it to “do x with this data”. It does fine.
In a completely separate chat, I tell it to “do Y with this data”… and instead it gives me X. I tell it to “do Z with this data”, and it once again would really rather just do X with it.
For a while now, I have had to feed it more context and tailored prompts than I previously had to.
There’s a much more accurate stat… and it’s disgusting