• 0 Posts
  • 53 Comments
Joined 1 year ago
Cake day: July 6th, 2023


  • It’s a very hard game. I really got into it playing with the Noita Together mod, and the Spell Labs mod when I was playing solo, to really figure out the game. Then once I felt I had a good grasp, I beat it, did the sun quest, eventually beat 33 orb Kolmi… Lost all my progress and had to do it again.

    If you can’t tell, I love Noita, but I fell in love with the wand building first. Spell Labs has excellent tutorials on improving your wand builds too. But I now have modded the game, so I’m not a casual player of it either.


  • I’ve seen a few, but it’s still kind of controversial. That being said, there is a time and a place where agile works, but it takes the right team composition and the right style of agile, and that style tends to piss off micromanaging middle managers, so it rarely is allowed.

    I had an article saved in my work slack before I left that company (for health reasons), but a currently popular one seems to be this one: https://johnfarrier.com/agile-failure-what-drives-268-higher-failure-rates/

    My take is based on years of interaction with companies and friends at other companies. The biggest problem isn’t necessarily Agile itself, but that agile is not intended for long-term projects. Agile is fantastic in short-turnaround work such as web dev, and because those short-turnaround shops have such easily visible results, managers take their practices as gospel. Thus comes Corporate Agile: https://web.archive.org/web/20240524230754/https://bits.danielrothmann.com/corporate-agile (link is from the Internet Archive because I can’t find his new site if he moved).

    Long story short, corporate agile is the agile the bosses want, since it keeps them constantly involved through more and more “agile” meetings. You know. Meetings. The antithesis of Agile. The place productivity goes to die. I had to remind our bosses multiple times that Agile dictates stand-ups include ONLY the developers and the scrum master, and pointed them to the agile training they gave me. Didn’t matter. They’re the boss. This is a pretty common breakdown in Agile. So daily stand-up turned into a daily meeting, since the quick status updates now had to be broken down for the boss. Every. Single. Day.

    Agile at its most basic is intended to reduce meetings to once a week so the rest of the time can be spent developing. Every company I know ended up including devs (even junior devs) in at least 300% more meetings within 6 months of switching to Agile. And on average, it takes a programmer half an hour after any interruption to return to their previous level of productivity, largely due to the limitations of working memory. (There are many research papers on this if you want them.)

    But to get back to the original point. Because agile concentrates on short, immediately tangible and verifiable benefits, any progress that takes longer than a sprint isn’t allowed. (It actually is, with proper implementation, since Agile is supposed to be adapted on a team-by-team basis to make things work, but companies want everyone on exactly the same page.) Guess what doesn’t have immediately tangible and verifiable benefits? That’s right, research. Guess what’s still in a research phase? Aside from basically anything that isn’t on the market yet, self-driving technology is very much research driven. Lots of trial, error, and long development cycles. Longer than a sprint, for sure. And anyone who says self-driving is on the market should try the exercise of finding one level 5 self-driving car that hasn’t been recalled over false marketing or safety concerns. The technology isn’t there yet. It could be getting there, but profits are getting in the way of progress.


  • Realistically, trains would revolutionize the road transport of goods and people if the rail industry properly maintained its rails, operated above board (unlike the one that had the chemical spill in Ohio, among other issues), and expanded a bit. The largest expense in goods transport is long haul, and no one wants to drive long haul. Last-mile delivery will probably need trucks and drivers for at least 3 to 5 more decades, and taxi services have similar challenges to last-mile delivery. Personal self-driving systems need even more consideration than taxi services, and will likely take five to ten years after taxi services become recognized as safe.


  • In my (in-the-industry) experience: Agile killed safe development by pushing superficial internal deadlines that look good instead of being good. Safety requirements therefore are never met, but people keep looking like they’re approaching at least one while sacrificing other things no one is concentrating on, causing more setbacks than improvements. Self-driving will not be legally commercialized until either someone lobbies bad development onto the roads, or capitalism realizes that quarterly profit isn’t as important as ten-year profit and Agile finally burns in a god damn fire.


  • Not necessarily the case, but if it’s affecting your life so strongly, you might want to get checked by a medical professional.

    Long COVID can destroy your life. Depression can destroy your life. Iron deficiency can ruin your life. A lot of things you might write off as just being tired may actually have an underlying cause, especially if simple fixes like “touch grass” style clichés do nothing for you.

    It’s not always the answer, but it’s good to rule out in that case.


  • I was told in 2009 “Why optimize? Hardware upgrades will make your efforts obsolete anyway.” So… I devoted my time to optimization, because fuck that. I ended up doing algorithm optimization in my first full time job, and loved… That part of the job at least.

    Indie games and co-op games are my jam. I feel for all of this comment.




  • Poik@pawb.social to Science Memes@mander.xyz · Sardonic Grin · 2 months ago

    That’s LLM bull. The model already knows hangman; it’s in the training data. It can introduce variations on the data, especially in response to your stimuli, but it doesn’t reinvent that way. If you want to see how it can go astray, ask it about stuff you know very well and watch how its responses devolve. Better yet, gaslight it. It’s very easy to convince LLMs that they’re wrong, because they’re usually trained for yes-manning and non-confrontation.

    Now don’t get me wrong, LLMs are wicked neat, but they don’t come up with new ideas; they can only be pushed towards new concepts, even when they don’t grasp them. They’re really good at sounding sure of themselves, and can easily get people to “learn” new “facts” from them, even when completely wrong. Always look up their sources (which Bard, Google’s, can natively get for you in its UI), but enjoy their new ideas for the sake of inspiration. They’re neat toys, which can be used to provide natural language interfaces to expert systems. They aren’t expert systems.

    But also, and more importantly, that’s not zero-shot learning. Neat little anecdote from a conversation with them though. Which model are you using?


  • Poik@pawb.social to Science Memes@mander.xyz · Sardonic Grin · 2 months ago

    No. AI, and what you’re more likely referring to, machine learning, has had applications for decades. Basic work dates back into the ’60s, mostly for quick things, and 1D data analysis was useful long before images (voice and things like biometrics). But there are many more types of AI. Bayesian networks (still in the learned category) were huge breakthroughs and still see a lot of use today. Decision trees, Markov chains, and first-order logic are the most common video game AI, and usually rely on expert tuning rather than learned results.
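    To give a feel for what “expert tuning rather than learned results” means, here’s a minimal sketch of a Markov-chain enemy AI. The states and transition probabilities are entirely made up for illustration; real games hand-tune tables like this.

```python
import random

# Hypothetical enemy states with hand-tuned transition probabilities,
# the kind of "expert tuning" classic game AI relies on.
TRANSITIONS = {
    "patrol": [("patrol", 0.7), ("chase", 0.3)],
    "chase":  [("chase", 0.6), ("attack", 0.3), ("patrol", 0.1)],
    "attack": [("attack", 0.5), ("chase", 0.5)],
}

def step(state: str) -> str:
    # Sample the next state from the current state's distribution.
    states, weights = zip(*TRANSITIONS[state])
    return random.choices(states, weights=weights)[0]

# Simulate a few ticks of the enemy's behavior.
state = "patrol"
history = []
for _ in range(10):
    state = step(state)
    history.append(state)
```

    No training data involved: the designer just edits the table until the enemy “feels” right.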

    AI is a huge field that’s been around longer than you might expect, and it permeates a lot of tech. Image stuff is just the hot application since the deep-learning boom that started around 2009 with a bunch of papers that helped get actually beneficial learning in deeper models (I always thought it started roughly with Deep Boltzmann Machines, but there’s a lot of work in that era that chipped away at the problem). The real revolution was general-purpose GPU programming getting to a state where these breakthroughs weren’t just theoretical.

    Before that, we already used a lot of computer vision, and other techniques, learned and unlearned, for a lot of applications. Most of them would probably bore you, but there are a lot of safety critical anomaly detectors.


  • Poik@pawb.social to Science Memes@mander.xyz · Sardonic Grin · 2 months ago

    This actually is a symptom of the sort of “beneficial” overfit in deep learning. As someone whose research is in low data, long tails, and few-shot learning: there are a few things that smaller networks did better in generalization, and one thing they particularly did better (without explicit training for it) is gauging uncertainty. This uncertainty is sometimes referred to as calibration. Calibrating deep networks can yield decent probabilities that can be used to show uncertainty.

    There are other tricks for this. My favorite strategies prep the network for learning new things. Large-margin training and the like are good things to look into. Having space in the output semantic space (the layer immediately before the output, or earlier for encoder-decoder style networks) allows larger regions for distinct unknown values to be separated from the known ones, which helps inherently calibrate the network.
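    For a concrete (if toy) picture of what post-hoc calibration looks like, here’s a minimal NumPy sketch of temperature scaling, one common calibration trick: fit a single temperature on held-out logits so the softmax probabilities better match the error rate. The toy data and the grid search are assumptions made up for illustration, not anything from the research mentioned above.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: T > 1 flattens (softens) the distribution.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels at temperature T.
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Simple grid search for the temperature minimizing validation NLL.
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy "validation set": a network that is right most of the time but
# badly overconfident on the samples it gets wrong.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
logits = rng.normal(0, 1, size=(200, 3))
logits[np.arange(200), labels] += 2.0            # usually correct
flip = rng.random(200) < 0.3
logits[flip] = rng.normal(0, 1, size=(int(flip.sum()), 3)) * 5  # overconfident noise

T = fit_temperature(logits, labels)
```

    The network’s predictions don’t change (argmax is unaffected by a single temperature), only the confidence attached to them, which is exactly the uncertainty-gauging problem described above.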


  • Which end? The main story is just a narrative device; in fact, you shouldn’t really obey the narrator at all. Calling any end “The End” doesn’t make sense in the context of the game, really. Unless you just broke out of the mind control facility three times and then called it quits? That end is deliberately unenticing so that you try literally anything else before putting the game down. I think the going-insane end sticks with me the most. Although the game dev commentary in the recent release is fun.




  • A lot of drugs cause permanent problems when abused, and are still prescribed. Testing is needed to figure out if there’s safe dosing and whatnot. Worse, a safe dosage for one person may be incredibly unsafe for another, just like with depression meds, which can permanently cause mental issues (in addition to depression) at normally prescribed and “safe” dosages. This is why honest discussions and ongoing check-ins with your doctor are vital for any prescription change. Hell, penicillin almost killed my mom, and that’s relatively safe unless you have an allergic reaction.

    It’s definitely hard to test with drugs that have non-medical and very obvious side effects. Hopefully there is a breakthrough in understanding the mechanisms, so we can make safe PTSD-treating meds, but something so drastically painful to the person experiencing it may not have a safe cure, because the systems that go haywire are so ingrained in the preservation systems of our brains.

    Brains are weird. Any tampering is possibly dangerous.


  • Poik@pawb.social to Science Memes@mander.xyz · ✨️ Finish him. ✨️ · 4 months ago

    This is why the machine learning community goes through arXiv for pretty much everything. We value open and honest communication and abhor knowledge being locked down. That’s why he views things this way: he’s involved in a community that values real science.

    arXiv is free, and all modern science should be open. There were reasons for journal publishers in the past, since knowledge dissemination was hard and they facilitated it. Now they just gatekeep.