• 0 Posts
  • 162 Comments
Joined 2 years ago
Cake day: October 1, 2023



  • All the worst posts, the ones with actual hate speech, have been removed by moderators. The ones I see remaining are generally along the lines of “this doesn’t have anything to do with politics,” “DHH didn’t actually say what you say he said,” “I support your big tent policy,” and “illegal immigrants have broken the law.” None of these are hate speech as written. I don’t like them supporting Omarchy, and I don’t agree with the posts in support of Framework’s stance, but I would say Framework has moderated where necessary in that post.





  • I mean, having to plug something in to charge is annoying, and requires you to have a charging cable close by or to charge the device on a schedule. This eliminates that entirely. A keyboard that always has power sounds pretty nice. I only use a wireless keyboard when travelling, but imagine this with an Xbox controller or wireless headphones. If you find either of those applications useful, you can at least understand the usefulness of a wireless keyboard.


  • I love the idea of smart glasses, and would happily buy them. However, they’d need to 1. have 3rd-party app support and 2. be able to work without connecting to any tech company’s servers. I’ve gotten used to my Android phone that doesn’t have Google Play Services, and I’ll never go back to having a device that phones home without my permission. In a perfect world I’d like some FOSS firmware and OS to run on them, but I’d be willing to go without, as long as I could disable traffic to all major tech company servers.

    Unfortunately these requirements will likely mean I won’t be getting smart glasses any time soon





  • Was on the fence for a long time, and I made the move just recently (after the pricing changes; they didn’t affect me since I was grandfathered in, but I saw them as a harbinger of worse things to come). Wizarr solved my biggest problems with Jellyfin: I can just send an invite link, and it creates accounts for people on Jellyfin, Audiobookshelf, and Kavita, and lets me set up introductory guides for everything. Despite the menu UI/UX being significantly worse than Plex’s, playback is smoother, load times are shorter, and it can actually handle streaming over really slow internet connections, something Plex had a lot of trouble with.

    The only app I noticed missing was the Tizen app, but they are working on getting it approved. Only one family member was using a Tizen TV, so I just gave them an old Chromecast to run off instead.




  • Your understanding of Aphantasia is a bit off; I think the folks in the second group are just stupid. I have complete Aphantasia, and if something is explained to me, I can understand what your plans for it would be. If I were shown a CAD model, it would be extremely clear. The things I can’t do are see my wife’s face in my head or picture the last place I left something. However, that doesn’t mean I couldn’t describe to you what my wife looks like, or that I can’t remember where I left something. Also, thinking abstractly is what people with Aphantasia are best at. I can’t remember the specifics, but they are significantly more likely to end up in a STEM field where all they do is abstract thought (myself included).

    I understand, though; it’s easy for me to imagine how someone who can picture things in their mind would experience things, because I can see things with my eyes. But someone who has a mind’s eye can’t really understand what it would be like to not have one. Most things people would think are issues for me aren’t; I’ve just got different ways of remembering and thinking about things that don’t require seeing them in my head.


  • I picked sentience as the culmination of the definitions of intelligence, awareness, etc., as those definitions end up being circular with one another, and sentience has a concrete definition that society and science have widely accepted as provable.

    I would argue otherwise: a black box for which I have coded an algorithm that checks the Collatz conjecture actually has no intelligence whatsoever, as it doesn’t do anything intelligent; it just runs through a set of steps, completely without awareness that it is doing anything at all. It may seem intelligent to you because you don’t understand what it does, but at the end of the day it just runs through instructions.
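    To make the point concrete, here’s a minimal sketch of such a black box (the function name and step limit are invented for illustration): it mechanically applies the Collatz rule, with no awareness of what it’s doing.

```python
def reaches_one(n: int, max_steps: int = 10_000) -> bool:
    """Mechanically apply the Collatz rule until n hits 1 (or we give up)."""
    for _ in range(max_steps):
        if n == 1:
            return True
        # The entire "behavior": halve if even, 3n+1 if odd. No thought involved.
        n = 3 * n + 1 if n % 2 else n // 2
    return False

print(reaches_one(27))  # True (27 takes 111 steps, but still reaches 1)
```

    From the outside it "answers questions" about the conjecture, but inside there is only a fixed rule being repeated.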

    I wouldn’t call the snake head responding to stimulus intelligent, as it is not using any form of thought at all to react, it’s purely mechanical. In the same way, a program that has been written that solves a problem is mechanical, it itself doesn’t solve any problem, it simply runs through, or reacts to, a set of given instructions.


  • I’ve gone down the recursive-definitions rabbit hole, and while it’s far too much to chart out here, the word that terms like “intelligence” and “thought” all eventually point to is “sentience.” And while the definition of sentience also ends up being largely circular, we’ve at least reached a term that has been used to make modern legislation based on actual scientific study, and one widely accepted as provable, which LLMs don’t meet. One of the most common tools for determining sentience is distinguishing reactionary from complex actions.

    I disagree that an if/else statement is the most basic element of intelligence; I was just playing into your hypothetical. An if/else is purely reactionary, which doesn’t actually give any sign of intelligence at all. A dead, decapitated snake head will still bite something that enters its mouth, but there isn’t any intelligent choice behind it; it is also purely reactionary.
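    The reflex really is just a bare if/else (a toy model I made up for the example, not anyone’s actual code): identical stimulus, identical response, every single time, with no choice anywhere.

```python
def reflex(stimulus_in_mouth: bool) -> str:
    """A purely reactionary 'snake head': same input always yields same output."""
    if stimulus_in_mouth:
        return "bite"
    return "no reaction"

print(reflex(True))   # always "bite", regardless of any wider context
print(reflex(False))  # always "no reaction"
```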

    I also think a bit is information in the same way that Shakespeare’s complete works are information, just at a much smaller scale. A bit on its own means nothing, but given context (say, “a 1 means yes and a 0 means no to the question ‘are you a republican?’”), it actually contains a lot of information. Same with Shakespeare: without the context of culture, emotion, and a language to describe those things, Shakespeare’s works are as useless as a bit without context.
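    A toy sketch of that idea (the question and encoding are the made-up context from above): the raw bit is meaningless until a shared key interprets it.

```python
def decode(bit: int, context: dict) -> str:
    """The bit only becomes information once a shared context interprets it."""
    return context[bit]

# Context: the agreed question is "are you a republican?", 1=yes, 0=no.
answer_key = {1: "yes", 0: "no"}

print(decode(1, answer_key))  # "yes" -- meaningful only because of the key
```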

    I read the study you listed, and I also disagree that “having a world model” is a good definition of awareness/consciousness, and I disagree that the paper proves LLMs have a world model at all. To be clear, I have taken multiple university classes on ANNs (LLMs weren’t a thing when I was in university) and multiple classes (put on by my employer) on LLMs, so I’m pretty familiar with how they work under the hood.

    Whether they are trained to win or trained on what a valid move looks like, the data they use to store that training looks the same: some number of chained, weighted connections that represent what the next best token(s) (or in OthelloGPT’s case, the next valid move) might look like. This is not intelligence; this is again reactionary. OthelloGPT was trained on 20,000,000+ games of Othello, all of which contained only valid moves. Of course this means OthelloGPT will have weighted connections that more likely lead to valid moves. Yes, it will hallucinate when given a scenario it hasn’t seen before, and that hallucination will almost always look like a valid move, because that’s all it has been trained on. (I say almost because it still made invalid moves, although rarely.)

    I don’t even think this proves it has a world model; it just knows which weights lead to a valid move in Othello, given a prior list of moves. The probes they use simply modify the character sequence OthelloGPT is responding to, or the weights it uses to determine which token comes next. There is no concept of “board state” being remembered; as with all LLMs, it simply returns the most likely next token sequence to follow the previously given token sequences, which in this case are previous moves.
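    The “weighted connections” point can be caricatured like this (the table and moves are invented; a real model learns millions of weights, not a lookup table): given a history of moves, the most heavily weighted continuation comes out, and no board state is stored anywhere.

```python
# Toy "next-token predictor": training boils down to weighted continuations.
# Move names are illustrative Othello-style openings, not real model data.
weights = {
    ("d3",): {"c5": 0.6, "e3": 0.4},
    ("d3", "c5"): {"d6": 0.8, "f4": 0.2},
}

def next_token(history: tuple) -> str:
    """Return the highest-weighted continuation for a given move history."""
    options = weights[history]
    return max(options, key=options.get)

print(next_token(("d3", "c5")))  # "d6" -- picked purely from the weights
```

    There is no board anywhere in that program; the “move” that comes out is just the heaviest edge hanging off the history it was handed.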

    Reactionary actions aren’t enough to determine whether something is capable of thought. As with my snake example above, there are many things, living and not, that will act without thought, as simple reactions to outside stimulus. LLMs are in this category; they are incapable of intelligence. They can only regurgitate the response that most likely follows the question, nothing more. There is no path for modern LLMs to reach intelligence, because the way they are trained fundamentally doesn’t lead to it.


  • That’s fair, but as someone who likes to contribute to FOSS projects with features that I want, I’d like every tool I use to be FOSS, so I can make them work exactly the way I want, while also providing something for those who don’t want to or can’t pay for a tool like this, or who don’t want the inevitability of spending hundreds of hours getting used to a tool, only for the owning company to make it unusable for them.

    In FOSS projects, if a project starts to go a route you don’t like, you can ignore all future updates and still get the exact experience you wanted.


  • I think this is what Louis was going for. He isn’t asking for no more companies, just for companies that make a product (it doesn’t even need to be a good one) whose sole purpose is to try (it doesn’t even need to succeed) to be useful to the consumer.

    I think he hit his mark pretty well for the symbol, but whether or not I agree with his view on things is a different story entirely.