  • I’ve gamed on Linux for the past 5 years. If you use Steam, most stuff works out of the box after you enable a single setting. Now that the Linux gaming community is growing, it’s easier to find workarounds for the games that don’t work. The only games that are hopelessly broken right now are games with intrusive anti-cheats that don’t support Linux. You can head over to protondb.com and check the compatibility status of your games, including workarounds when necessary.

    If you don’t use Steam, then I’m not sure. Last time I played non-Steam games, there was more troubleshooting and tweaking required, but it’s been a couple of years and I don’t know the current state. It’s worth noting that Valve’s compatibility layer, Proton, is open-source and based on other open-source projects. There’s work currently being done to port the functionality outside of Steam. Hopefully, this will mean that in the future all launchers will behave similarly.

    But that’s just the software side of things. Don’t forget to check how your hardware works on Linux as well.



  • It’s funny because you’re making the opposite point of the one you think you’re making. Because if you put together the two pieces of information from your comment, the entire picture is:

    OpenAI makes a deal to pay a media org for their content, and to link back to the original articles, with the money it makes from stealing everybody else’s content

    That’s already pretty bad, even without the points you neglected to mention, like how some of the content that is indirectly making money for Ars Technica is stolen from their competitors, or how Ars Technica has basically become a worthless journalistic source on AI at a time when public opinion is not yet settled on its morality and precedent has not been set on its legality. How is this not “sold out” to you?


  • This was a Discord dumpster fire that was thankfully put out months ago.

    Right, but the original mail from FDO basically said “we know about these examples of bad behavior, we want to notify you that they are definitely unacceptable, and we expect to never see anything like them again”. And Vaxry had a meltdown over that. Among other things, he doesn’t get why he should be held accountable for behavior outside FDO. He has also rejected, and commented negatively on, the idea of any code of conduct at all for his project. Vaxry is making it as clear as possible that he will make zero commitment to opposing toxicity in his community, and people took him at his word. The idea that he was punished solely for a couple of comments that happened years ago and are definitely “fixed” is Vaxry’s own misleading interpretation.


  • For someone to work it out, they would have to be targeting you specifically. I would imagine that is not as common as, e.g., using a database of leaked passwords to automatically try as many username-password combinations as possible. I don’t think it’s a great pattern either, but it’s probably better than what most people would do to get easy-to-remember passwords. If you string it together with other patterns that are easy for you to memorize, you can get a password that is decently safe in total.
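
    To put rough numbers on “decently safe in total”, here’s a back-of-the-envelope entropy sketch. The parts and pool sizes are made-up assumptions for illustration, not a recommendation:

    ```python
    import math

    # Rough entropy estimate for a password assembled from independent,
    # easy-to-memorize parts. Pool sizes are illustrative assumptions;
    # a pattern the attacker already knows contributes almost no entropy.
    parts = [
        ("two words from a 7776-word Diceware-style list", 7776 ** 2),
        ("a site-name rule, once an attacker guesses it", 2),
        ("a three-digit number", 1000),
        ("one symbol out of ten", 10),
    ]

    total_bits = 0.0
    for name, pool in parts:
        bits = math.log2(pool)
        total_bits += bits
        print(f"{name}: ~{bits:.1f} bits")

    print(f"combined: ~{total_bits:.1f} bits")  # ~40 bits with these assumptions
    ```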

    Don’t complicate it. Use a password manager. I know none of my passwords and that’s how it should be.

    A password manager isn’t really any less complicated. You’ve just outsourced the complexity to someone else. How have you actually vetted your password manager, and what’s your backup plan for when they fuck up?


  • Imagine you were asked to start speaking a new language, e.g. Chinese. Your brain happens to work quite differently from everyone else’s. You have immense capabilities for memorization and computation, but not much else. You can’t really learn Chinese with this kind of mind, but you have an idea that plays right into your strengths. You will listen to millions of conversations between real Chinese speakers and mimic their patterns. You make notes like “when one person says A, the most common response from the other person is B”, or “most often after someone says X, they follow it up with Y”. Then you go into conversations with Chinese speakers and just perform these patterns. It’s all just sounds to you. You don’t recognize words, and you can’t even tell from context what’s happening. If you do this well enough, you are technically speaking Chinese, but you will never have any intent or understanding behind what you say. That’s basically LLMs.
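
    That note-taking strategy has a real toy analogue: a first-order Markov chain over words, which records what tends to follow what and replays those statistics with zero understanding. A minimal sketch, with a stand-in corpus:

    ```python
    import random
    from collections import defaultdict

    # Toy Markov-chain "speaker": memorize which word follows which in
    # the training text, then replay those statistics. No word means
    # anything to it -- it's pure pattern mimicry, as described above.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    follows = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current].append(nxt)

    def babble(start: str, length: int = 8) -> str:
        word, out = start, [start]
        for _ in range(length - 1):
            options = follows.get(word)
            if not options:
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

    print(babble("the"))  # e.g. "the cat ate the mat and the cat" (output varies)
    ```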


  • Just because something is available to view online does not mean you can do anything you want with it. Most content is automatically protected by copyright. You can use it in ways that would otherwise be illegal only if you are explicitly granted permission to do so.

    Specifically, Stack Overflow licenses any content you contribute under CC BY-SA 4.0 (older content is covered by other licenses, which I omit for simplicity). If you read the license you will note two restrictions: attribution and “share-alike”. So if you take someone’s answer, including the code snippets, and include it in something you make, even if you change it to an extent, you have to attribute it to the original source and you have to share it under the same license. You could theoretically mirror the entire SO site’s content, as long as you used the same licenses for all of it.
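
    In practice, satisfying those two terms when you reuse a snippet can be as simple as a comment block. A hypothetical example; the URL, author, and function are placeholders, not a real answer:

    ```python
    # Adapted from a Stack Overflow answer (placeholders, for illustration):
    #   source: https://stackoverflow.com/a/0000000
    #   author: example_user
    #   license: CC BY-SA 4.0 -- per the share-alike term, this derived
    #   version is shared under CC BY-SA 4.0 as well.
    def chunked(seq, size):
        """Yield successive size-sized chunks from seq."""
        for i in range(0, len(seq), size):
            yield seq[i:i + size]
    ```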

    So far, AI companies have simply scraped everything and argued that they don’t have to respect the original license. They argue that it is “fair use” because AI is “transformative use”. If you look at the historical usage of “transformative use” in copyright cases, their case is kind of bullshit actually. But regardless of whether it will hold up in court (and whether it should), the reality is that AI companies are going to use everybody’s content in ways they have not been given permission to.

    So for now it doesn’t matter whether our content is centralized or federated. It doesn’t matter whether SO has a deal with OpenAI or not. SO content was almost certainly already used for ChatGPT. If you split it into hundreds of small sites on the fediverse, it would still be part of ChatGPT. As long as it’s easy to access, they will use it. Allegedly they also use torrents for input data, so even content that is not publicly viewable is not safe. If/when AI data sourcing is regulated, the “transformative use” argument fails in court, and the fines are big enough for the regulation to actually work, then sure, the situation described in the OP will matter. But we’ll have to see if that ever happens. I’m not holding my breath, honestly.