• 0 Posts
  • 155 Comments
Joined 1 year ago
Cake day: June 12th, 2023


  • I agree that it’s a “hate the game, not the player” situation. The issue is how much influence he could have to steer the market to favor his product over the competition. It’s happened so many times in history: the better product fails because its maker can’t play the game like the inferior company can.

    To quote “Pirates of Silicon Valley”:

    Steve Jobs: We’re better than you are! We have better stuff.

    Bill Gates: You don’t get it, Steve. That doesn’t matter!

    So is it fair to consumers for big companies to be able to influence the game itself rather than just playing within the same rules? I’d say no.


  • Sam started this. The comparisons would have come up anyway, but the claims from users are a lot harder to dismiss when your CEO tweeted “her” right before the release. I don’t think the voice in the demos sounded exactly like her myself, just closer in seamlessness and attitude, which is a problem in itself down the road for easily convinced users.

    AI companions will be both great and dangerous for those with issues. Wow, it’s another AI safety path that apparently no company is bothering to explore.


  • Maybe your argument isn’t against Lemmy, but against online discussion in general. Heated debates that break down into less constructive posting have been around since the days of BBSes and Usenet. I don’t disagree with your point that people should try to act like adults when discussing topics, but a (not so) different format doesn’t change how people are, especially when they feel protected enough by anonymity to react badly.


  • From the point of view of just flipping the charge, yes, it’s called antimatter: antielectrons (positrons) are positive, antiprotons are negative. From the mass point of view, though, it would be a different kind of physics altogether, since electrons have virtually no mass compared to the other two particles, and a proton’s wave-like behavior inside an atom is negligible next to an electron’s, so neither protons nor electrons would act the same if you just swapped them in a Bohr atom model arrangement (rough sketch below). Maybe someone with more in-depth knowledge can give additional or better reasons.
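
    To make the mass point concrete, here is a rough back-of-the-envelope sketch using the plain Bohr radius formula (assuming the simple hydrogen-like model only, with the orbiting particle’s mass as the one thing that changes, and ignoring reduced-mass or any finer corrections):

    ```latex
    % Bohr radius for a particle of mass m and charge -e bound to a +e nucleus:
    a_0 = \frac{4 \pi \varepsilon_0 \hbar^2}{m e^2}
    \qquad\Rightarrow\qquad
    a_0 \propto \frac{1}{m}
    % Put a proton-mass particle (m_p is about 1836 m_e) in the orbiting role
    % and the orbit shrinks by a factor of roughly 1836, so the "atom" stops
    % looking anything like ordinary hydrogen.
    ```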


  • “I have to credit ChatGPT4 for this answer.”

    Credit, or a warning?

    From my understanding, a big part of the problem with PET is availability: either it’s such a small percentage of all plastic that demand outstrips supply, or it gets lost among the rest and ends up mixed in or ruined for recycling.

    Honestly, the debate over which material is better totally ignores the real problem - consumption demand. Reduce used to be the first ‘R’, but it was not friendly to the capitalistic mindset or an exploding population, so Recycling became the big focus, along with the subtle blaming of consumers for not being THE solution whenever they didn’t participate.


  • It’s partially because of cost; new plastic is cheaper than trying to recover old. But very few plastics can be truly recycled chemically - most of it is instead reformed for other purposes. Glass and metals were always a better environmental choice (with their own limitations too), but plastic is so cheap and versatile it’s hard to compete with. And it’s not just plastics - look around the household and imagine the petroleum products gone; it’s amazing how they’re everywhere. Yet another dead end we’ve gotten ourselves into.


  • There are two dangers in the current race to get to AGI and in developing the inevitable ANI products along the way. One is that advancement and profit are the goals, while concern for AI safety and alignment in case of success has taken a back seat (if it’s even considered anymore).

    Then there is number two - we don’t even have to succeed at AGI for there to be disastrous consequences. Look at the damage early LLM usage has already done, and it’s still not good enough to fool anyone who looks closely. Imagine a non-reasoning LLM able to manipulate any medium well enough to be believable even against other AI testing tools. We’re just getting to that point - the latest AI Explained video discussed Gemini and Sora, and one of them (I think Sora) fooled some text generation testers into thinking its stories were 100% human created.

    In short, we don’t need full general AI to end up with catastrophe; we’ll manage that ourselves easily enough with the “lesser” ones. Which will really fuel things if AGI comes along and sees what we’ve done.