• 2 Posts
  • 352 Comments
Joined 2 years ago
Cake day: July 4th, 2023

  • pixxelkick@lemmy.world to Fuck AI@lemmy.world · Efficency · 3 hours ago

    If they are mandated, that’s just as bad I agree.

    At my company we have tonnes of in house Lunch and Learns (on paid time, non mandatory) that are effectively “I found this super useful thing and want others to know about it”

    And I’ll join these things, and see (person), who is on my team, in it too. Later I’ll chat with them about it, or at least try, and they’ll have zero clue wtf I’m talking about.

    And it becomes obvious they just joined the meeting to give the illusion of caring; they were probably AFK the whole time. And I suspect this because they often do the same for our “in team” mandatory meetings discussing critical stuff on the project.


  • pixxelkick@lemmy.world to Fuck AI@lemmy.world · Efficency · 9 hours ago

    Sorta just sounds like you can probably fire a few employees who don’t give a fuck.

    From experience, a lot of companies tend to be propped up by like 10% of their developers doing 90% of the work, maybe 50% of developers doing the last 10%, and then like 40% of developers either doing fuck all or actively harming the codebases and making more work for the other 60%.

    And more often than not, these people are the ones sending stuff like “AI Note Takers” merely to give the illusion of existing.

    In reality you have like three devs who actually care and do most of the work having a discussion, and like 10 to 30 AFK participants with their cameras off who probably aren’t even listening or don’t care.

    And the thing is, it shows later. Those same devs have zero clue wtf is going on, zero clue how to do their job, and they constantly get their code sent back, or they plague up some codebase that doesn’t have reviewers to catch them.

    The AI note takers are just the new version of people showing up to meetings with their camera off and never speaking a word.

    Except now they burn orders of magnitude more power to do it, which is awful.






  • I still use, and hear fellow millennials use, “lmao” to this day

    Usually ironically with a twinge of negativity. Pronounced “luh-mow”

    IE “Did you hear the US elected Trump again?” “lmao”

    Usually only used on its own; it suddenly sounds weird if you put it in a sentence. It’s purely used as a quick response to show ironic dissatisfaction.

    Pretty much the verbal equivalent of an eyeroll.



  • The principle that one-shot prompts are pretty critical for logic puzzles is well established at this point, and has been for well over a year now.

    Like I said, this is like someone dragging their lawnmower out onto the lawn without starting it, and then proclaiming lawnmowers suck cuz their lawn didn’t get cut.

    You have to start the thing for it to work, mate, lol.

    I get that it’d be nice if you didn’t have to, but that’s not how an LLM works. LLMs are predictive text algorithms, which means they need something to start predicting from as a starting point; that’s their whole shtick.

    If you don’t give them a solid starting point to work from, you are literally just rolling the dice on whether it’ll do what you want or not, because zero-shot prompting is going full “jesus take the wheel” mode on the algorithm.

    It’s annoying that marketing and consumers have created this very wrong perception about “what” an LLM is.

    When you ask someone “knock knock” and they respond with “who’s there?”, that’s all an LLM is doing: it’s just predicting what text ought to come next statistically.

    If you don’t establish a precedent, you’re going full RNGjesus, praying it chooses the correct direction.

    And more importantly, and I CANNOT stress this enough…

    Once an LLM gets the answer wrong, if you keep chasing that thread, it will continue to behave wrong.

    Because you’ve now established the pattern in that thread that “User B is an idiot”, and told it it’s wrong, and that means it’s gonna keep generating the content of what a wrong/stupid responder would sound like.

    Consider this thought experiment, if you will:

    If I hand a person the incomplete text of a play where two characters, A and B, are talking to each other, and the entire text is B saying dumb shit and A correcting B, and I ask that person to add some more content to the end of what I’ve got so far, “finish this” so to say, do you think they’re gonna suddenly pivot to B no longer being an idiot?

    Or… do you think it’s more likely they’ll keep going with the pattern I have established, and continue to make B sound stupid for A to correct?

    Probably the latter, right?

    That’s all an LLM is, so if you already have 3 instances of you telling the LLM “No, that’s wrong, you are dumb”, guess what?

    You have literally conditioned it to get even dumber, so it’s gonna respond with even more wrong responses, because you’re chasing that thread.
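
    The “play” framing above can be sketched in a few lines. This is a minimal, hypothetical illustration (no real LLM API is called; the role names and transcript are made up): an LLM sees the whole chat as one text to continue, so a thread full of corrections is statistically a pattern of bad answers.

```python
# Hypothetical sketch: why "chasing the thread" conditions an LLM toward
# more wrong answers. No real LLM API is used; the transcript is made up.

def build_transcript(turns):
    """Flatten a chat history into the single text an LLM actually predicts from."""
    return "\n".join(f"{role}: {text}" for role, text in turns)

turns = [
    ("User", "What is 17 * 24?"),
    ("Assistant", "17 * 24 = 418."),          # wrong (it's 408)
    ("User", "No, that's wrong, you idiot."),
    ("Assistant", "Sorry! 17 * 24 = 398."),   # still wrong
    ("User", "Still wrong!"),
]

transcript = build_transcript(turns)

# From the model's point of view this is just a play where character
# "Assistant" keeps being corrected -- statistically, the most likely
# continuation is another bad answer, not a sudden pivot to competence.
print(transcript.count("wrong"))  # prints 2
```

    The point of the sketch: the corrections themselves become part of the text being continued, which is why starting a fresh thread often works better than arguing with a wrong answer.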


  • LLMs are not self aware, any random nonsense they generate about themselves is not remotely reliable as a source of truth.

    You can’t ask it for info about who/what it is and take that at face value, it’s just as randomly generated as any other output.

    In terms of reasoning, you’ll wanna understand zero- vs one- vs many-shot prompting. Complex logic puzzles still typically require at minimum a one-shot prompt, but if complex enough they may require a multi-shot prompt to get going.

    Treat an LLM a lot like a lawn mower gas engine: if you just take it out on the yard and drag it around without actually starting the engine, it’s not surprising that it didn’t cut any grass.

    For all intents and purposes, for a puzzle like this you likely need to first provide an example of solving a different puzzle of the same “type”, demonstrating the steps to achieve a solution for that puzzle.

    Then you provide the actual puzzle to the LLM, and its success rate will skyrocket.

    The pre-load puzzle can be a simpler one; it’s mostly about demonstrating the format and steps of “how” you do this “type” of puzzle. That can usually be enough to get the ball rolling and get the LLM generating sane output.

    This is called “one-shot” prompting.

    However, for more complex stuff you may need to pre-prompt with 2 to 4 examples, ideally keeping the syntax very tight and small so the context window stays small (using stuff like icons and shorthands to turn an entire sentence into 2-3 words can help a lot).

    With multiple preloaded examples this can further boost the reliability of the LLM’s output. We call this “multi-shot” prompting.

    It’s very well known that even the best-trained LLMs still struggle a lot with logic puzzles given a zero-shot prompt.

    The exception is a well-known logic puzzle that is already well solved, in which case, instead of actually “solving” it, the LLM will simply regurgitate verbatim an answer someone wrote out on some random forum or whatever it was trained on.

    But for a unique new logic puzzle, it usually becomes necessary to at minimum one-shot prompt.
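
    To make the zero-/one-/multi-shot distinction concrete, here is a minimal sketch of how such prompts get assembled as text. The worked example and puzzle are invented for illustration, and no actual LLM is called; the point is just that the “shots” are solved examples prepended before the real puzzle.

```python
# Hypothetical sketch of zero- vs one- vs multi-shot prompt assembly.
# The puzzles are made up; a real prompt would be sent to an LLM API,
# which is deliberately not done here.

WORKED_EXAMPLE = (
    "Puzzle: A is taller than B. B is taller than C. Who is shortest?\n"
    "Steps: 1) A > B. 2) B > C. 3) Chain: A > B > C.\n"
    "Answer: C"
)

def make_prompt(puzzle, examples=()):
    """examples=() -> zero-shot; one example -> one-shot; more -> multi-shot."""
    parts = list(examples) + [f"Puzzle: {puzzle}\nSteps:"]
    return "\n\n".join(parts)

zero_shot = make_prompt("X is older than Y. Y is older than Z. Who is youngest?")
one_shot = make_prompt("X is older than Y. Y is older than Z. Who is youngest?",
                       examples=[WORKED_EXAMPLE])

# The one-shot prompt leads with a solved puzzle of the same *type*, so the
# model has a concrete pattern (steps, then answer) to continue from.
print(one_shot.startswith("Puzzle: A is taller"))  # prints True
```

    Ending the prompt mid-pattern (after “Steps:”) is the “starting the engine” part: the model’s most likely continuation is now the demonstrated step-by-step format rather than whatever a cold zero-shot prompt would drift into.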




  • Anytime an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.

    If I try, not even that hard, I can get gpt to state Hitler was a cool guy and was doing the right thing.

    ChatGPT isn’t anything specific other than a token predictor; you can literally make it say anything you want if you know how, it’s not hard.

    So if you write an article about how “gpt said this” or “gpt said that”, you better include the full context or I’ll assume you are 100% bullshit.


  • Sorta? Not tabs in the way you’d expect, but its default ones can be sufficient.

    Honestly though, once you get pretty good with hotkeys you stop using tabs. For all intents and purposes harpoon is tabs, but better, and without the UI. You just mentally pick harpoon keys that make sense for your jump points, like I’ll harpoon FooController.cs to c, FooService.cs to s, FooEntity.cs to e, and so on.

    And then I jump around with those keys. Usually when working I only need 5 harpoons or so, tops, for a chunk of work.


  • I still boot in sub-1s, so I don’t know what you mean by “bloated”.

    Lazy lets you boot ultra fast by loading stuff in the background later, so “bloat” doesn’t matter.

    nvim-dap does literally nothing until you trigger it, so its only impact on my startup is like 3 hotkey registrations :p

    It’s a perfectly fine debugger, works great. The fact I can fzf my stack trace with telescope actually kind of makes it superior? You can’t do that sorta stuff in any other IDE I know of.

    Also all my navigation stuff like telescope/harpoon/etc still apply when debugging, so I can literally debug faster jumping around the stack trace with hotkeys.

    Neovim doesn’t get any less awesome when it comes to debugging; a lot of its power still applies just as much haha


  • A lot of them are dependencies of other plugins.

    Stuff like icon support, and every little feature. Neovim is extremely minimalist to start, so you need plugins just to get something as simple as a scrollbar lol

    Things like git status of files and file lines, all your LSPs, syntax highlighting (for each language you work with), file explorer, you name it, there’s a lot.

    But what’s nice about nvim is that for any given feature, there are numerous options to pick from. There are probably a dozen options to choose from for what kind of scrollbar you want in your editor, as an example.

    So you end up with a huge amount of plugins in the end, for all your custom stuff you have configured.

    Things you have to set up yourself (though there are a lot of very solid copy-pasteable recipes for each feature):

    • Scrollbar
    • Tabs(if you want em)
    • bookmarking
    • every LSP
    • treesitter
    • navigation (possibly multiple of them; I use a file tree, telescope, and harpoon)
    • file history stuff
    • git integrations, including integrating it with the numerous other plugins you use (many of them can integrate with git for stuff like status icons)
    • Code commenting/uncommenting
    • Code comment tags (IE TODO/BUG/HACK/etc)
    • your plugin manager is also a plugin (I like lazy for wicked fast open speeds; neovim opens in under 1s for me)
    • hotkey management (I like to use which-key)
    • prose plugins (lots of great options here too, I use nvim for more than just coding!)
    • neorg, so I can use nvim for taking notes, scheduling stuff, etc too
    • debugger via nvim-dap
    • debugger UI via nvim-dap-ui
    • lualine, which is a popular statusline plugin people like to have at the bottom of their IDE for general file info
    • new-file-template, which lets me create templates for new files by extension (IE when I make a .cs file and start editing it, I can pick from numerous templates I’ve made to start from, same for .ts, .lua, etc etc)
    • git-conflict, which can detect git merge conflict sections in any type of file and give me hotkeys to pick A / B / Both / Neither, that sorta stuff

    The list goes on and on haha



  • It turns out that to plan their ill-fated expedition, the hikers heedlessly followed the advice given to them by Google Maps and the AI chatbot ChatGPT.

    Okay?

    Proceeds to not elaborate even remotely further on what ChatGPT told them

    Oh yeah, super high quality journalism here, folks. This entire article’s premise boils down to “They asked something (unknown what) of ChatGPT related to this hike, and they got something (unknown what) back, but we’re gonna go ahead and mention it and write a whole article about it”

    For all I know, they just asked gippity for tips on good trail mix, who knows! We sure never will, because this entire article doesn’t actually bother to tell us.

    FFS, can we please downvote this low quality garbage pretending to be journalism? Give me facts, people.



    Just one example: we have many population groups that live in areas where groundwater is used for drinking, and that also live near a firefighting training base/station that has released huge amounts of PFOAs into the aquifers.

    Crazy as it sounds, living next to a firefighting training station still biases you towards certain living conditions.

    Scientists are perfectly fine with using lab, mouse, and empirical cross-sectional studies - that’s all valid scientific evidence.

    Yeah obviously, but that’s still evidence, not proof. I used the word proof there intentionally.

    I’m not suggesting they actually do it, I’m calling out people that take a bunch of very good evidence and then treat it like it’s proof. That’s all

    And I’ve been using the words proof/prove this whole time.

    There’s lots of evidence, but there’s not enough yet to do more than draw an interesting correlation.

    But there’s definitely no proof, and clickbait videos that word it as such are trash.

    That’s what I am addressing: numpties taking this evidence, running off with it, and spreading disinformation by framing it as proof via their choice of words.

    Jesus. Fucking. Christ. People need to learn to read.

    I’m not sitting here saying PFAS don’t cause issues.

    I’m sitting here calling out clickbait youtubers who frame evidence as proof via poor wording to incite people.

    God fucking damnit, I hate how much people on the internet are so focused on being right that they won’t even read what you write properly, just so they can find things to pick a fight over. Fuck off lol