I saw another article today saying how companies are laying off tech workers because AI can do the same job. But no concrete examples… again. I figure they are laying people off so they can pay to chase the AI dream. Just mortgaging tomorrow to pay for today’s stock price increase. Am I wrong?

  • nandeEbisu@lemmy.world
    link
    fedilink
    arrow-up
    5
    ·
    6 hours ago

    No, it’s basically filling the role of an autocomplete and search function for code bases. We’ve had this for a while and it generally works better than a lot of the stuff we’ve had in the past, but it’s certainly not replacing anyone any time soon.

  • frog_brawler@lemmy.world
    link
    fedilink
    arrow-up
    2
    ·
    edit-2
    7 hours ago

    I don’t know Python, but I know bash and powershell.

    One of the AI things completely reformatted my bash script into a Python script the other day (that was the intended end result), so that was somewhat impressive.
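
    Something like this, going by the sort of thing it produced (a hypothetical one-liner, not my actual script):

        # bash original:  for f in *.log; do gzip "$f"; done
        import gzip
        import shutil
        from pathlib import Path

        # Compress every .log file in the current directory, then remove the
        # original, which is what gzip does by default
        for path in Path(".").glob("*.log"):
            with open(path, "rb") as src, gzip.open(f"{path}.gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            path.unlink()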

  • tecnohippie@slrpnk.net
    link
    fedilink
    English
    arrow-up
    26
    ·
    edit-2
    1 day ago

    If you want an example: my last job in telecom was investing hard in automation. It did a poor job at the beginning, but it got better and better, and while humans were still needed, we had less work to do. Of course, that meant that when someone left the job, my employers didn’t look for a replacement.

    To be honest, I don’t see AI doing the job of tech workers to the point where we should worry now… But in 20 years? That’s another story. And in 20 years, if I get fired, probably no one will want to hire me, so I’m already working on a plan B.

    • Devanismyname@lemmy.ca
      link
      fedilink
      English
      arrow-up
      7
      ·
      20 hours ago

      20 years? The way they talk it’s gonna happen in 20 weeks. Obviously, they exaggerate, but it does seem we are on the edge of something big.

      • Valmond@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        10 hours ago

        Yes, IMO tech is moving towards getting easier.

        I’m not saying it is, but I bet that in a couple of years you’ll be able to spin up a corporate website-management platform on a €50 Raspberry Pi instead of having a whole IT department managing email, webservers and so on.

        Things are getting easier and easier IMO.

      • tecnohippie@slrpnk.net
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        13 hours ago

        Yeah, when I said 20 years I wanted to express something that feels distant; I think we will see a big change sooner. To be honest, the plan B I’m working on, I’m trying to make it happen ASAP, hopefully next year or in two years. I may be overreacting, but personally I’m not going to wait for the drama to really begin before taking action.

  • vermyndax@lemmy.world
    link
    fedilink
    arrow-up
    37
    ·
    1 day ago

    I’m seeing layoffs of US workers, who are then being replaced by Indian, South American and Irish nationals… not AI. But they’re calling it AI.

      • sleepmode@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        15 hours ago

        I heard the Ireland hiring is also for tax reasons. But I’m seeing them move to South America more and more. Uruguay especially. I know Big Blue hired thousands there after doing RTO in the US.

        • HubertManne@moist.catsweat.com
          link
          fedilink
          arrow-up
          1
          ·
          6 hours ago

          I think that’s it: the EU sorta put a stop to Ireland being a massive tax loophole. Now the area is about as expensive as the rest, but it doesn’t have the big tax undercut like it used to.

  • ericatty@infosec.pub
    link
    fedilink
    arrow-up
    23
    ·
    1 day ago

    What I’m reading out of this… there’s going to be a massive shortage of senior programmers in 20(?) years. If juniors are being let go or not hired, and AI is doing the junior work…

    AI will have to massively improve, or else it’s going to be interesting when companies are trying to hold on to retirement-age people and train up replacement seniors to verify the AI delivers proper code.

  • Mojave@lemmy.world
    link
    fedilink
    arrow-up
    40
    arrow-down
    1
    ·
    1 day ago

    Yeah kinda, my coworkers talk to ChatGPT like it actually knows stuff and use it to fix their broken terraform code.

    It takes them a week or longer to get simple tickets like this done. One dude asked for my help last week; we actually LOOKED at the error codes and fixed his shit in about 15 minutes. Got his clusters up within an hour. Normally a week-long ticket, crunched out in 60 minutes by hand.

    It feels ridiculous because it’s primarily senior tech bro engineer types who fumble their work with this awful tool.

    • okamiueru@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      edit-2
      10 hours ago

      I have never seen a clearer divide, or a stronger correlation, than the one between the value I observe someone producing and whether they understand the limitations and value of LLMs.

      It’s exhausting, because, yes, LLMs are extremely valuable, but only insofar as they solve the problem of “possible suggestions”, and never as “answers and facts”. For some reason, and I suppose it’s the same reason bullshit is a thing, people conflate the two. And not just any “people” either, but IT developers and IT product managers, all the way up. The ones that have every reason to know better are the ones that seem utterly clueless as to what problems it solves well, what it is irresponsible to use it for, how to correctly evaluate ethics, privacy and security, etc. Sometimes I feel like I’m in a madhouse, or just haven’t found the same hallucinogen that everyone else is on.

  • Dr. Wesker@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    103
    arrow-down
    2
    ·
    edit-2
    2 days ago

    Do the job? No. Noticeably increase productivity, and reduce time spent on menial tasks? Yes.

    I suspect the layoffs are partly motivated by the expectation that remaining workers will be able to handle a larger workload with the help of AI.

    US companies in particular are also heavily outsourcing jobs overseas, for cheaper. They just don’t like to be transparent about that aspect, so the AI excuse takes the focal point.

    • makingStuffForFun@lemmy.ml
      link
      fedilink
      arrow-up
      31
      arrow-down
      4
      ·
      2 days ago

      I agree completely.

      We have an AI bot that scans the support tickets that come in for our business.

      It has a pretty low success rate, maybe 10% or 20%, in helping with the answer.

      It puts its answer into the support ticket; it does not reply to the customer directly. That would be a disaster.

      But 10% or so of our workload has now been shouldered off to the AI, which means our existing team can be more efficient by approximately 10%.

      It’s been relatively helpful in training new employees also. They can read what the AI suggests and see if it is correct or not. And in learning if it is correct or not, they are learning our systems.

      • paequ2@lemmy.today
        link
        fedilink
        English
        arrow-up
        15
        ·
        2 days ago

        They can read what the AI suggests and see if it is correct or not.

        What does this process look like? Or are there any rails that prevent the new employee from blindly trusting what the AI is suggesting?

        • makingStuffForFun@lemmy.ml
          link
          fedilink
          arrow-up
          14
          ·
          2 days ago

          Well, as they are new and they are in training, the new employee has to show their response to their team members before they reply.

          If they are going to reply incorrectly we stop them and show them what’s wrong with it.

          We are quite small, and it’s nice just to have something to help us with this process.

          The bot is trained on our actual knowledge base data. On basic queries it really does a great job, but when it’s something more system-based, or probably user error, it can get a bit fuzzy.

      • chaosCruiser
        link
        fedilink
        English
        arrow-up
        4
        ·
        2 days ago

        That’s also true when processing bills. The AI can give you suggestions, which often require some tweaking. However, sometimes the proposed numbers are spot on, which is nice. If you measure the productivity of a particular step in a long process, I would estimate that AI can give it a pretty good boost. However, that’s just one step, so by the end of the week the actual time savings are really marginal. Well, better than nothing, I guess.

    • SwizzleStick@lemmy.zip
      link
      fedilink
      English
      arrow-up
      15
      arrow-down
      1
      ·
      2 days ago

      reduce time spent on menial tasks

      Absolutely. It’s at the level where it can throw basic shit together without too much trouble, provided there is a competent human in the workflow to tune inputs and sanitise outputs.

      • Dr. Wesker@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        10
        ·
        edit-2
        2 days ago

        I use it to write my PR descriptions, generate class and method docstrings, annotate code I’m trying to grok or translate, and so forth. I don’t even use it to actually generate code, and it still saves me likely a couple of hours a week.
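
        For illustration, the docstring case looks something like this (hypothetical function, not from my codebase; the docstring body is the sort of thing the model drafts):

            def merge_intervals(intervals):
                """Merge overlapping or touching intervals into a minimal set.

                Args:
                    intervals: A list of (start, end) tuples, in any order.

                Returns:
                    A sorted list of (start, end) tuples with overlaps merged.
                """
                merged = []
                for start, end in sorted(intervals):
                    if merged and start <= merged[-1][1]:
                        # Overlaps (or touches) the previous interval: extend it
                        merged[-1] = (merged[-1][0], max(merged[-1][1], end))
                    else:
                        merged.append((start, end))
                return merged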

        • SwizzleStick@lemmy.zip
          link
          fedilink
          English
          arrow-up
          9
          ·
          2 days ago

          I haven’t thought about using it to annotate my garbage rather than generating its own. Nice idea :)

        • DontTakeMySky@lemmy.world
          link
          fedilink
          arrow-up
          6
          arrow-down
          1
          ·
          1 day ago

          I use it to (semi-)automate repetitive tasks, like adding a bulk set of getters, generating string maps for my types, adding handlers for each enum variant, etc. Basic stuff, but nice to save keystrokes (it’s all autocomplete).
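
          A sketch of the kind of boilerplate I mean (a hypothetical Status enum, not my real types):

              from enum import Enum

              class Status(Enum):
                  PENDING = "pending"
                  ACTIVE = "active"
                  CLOSED = "closed"

              # String map from enum member to display label
              STATUS_LABELS = {
                  Status.PENDING: "Pending review",
                  Status.ACTIVE: "Active",
                  Status.CLOSED: "Closed",
              }

              # One handler per enum member, dispatched through a map
              def handle_pending(item): print("queueing", item)
              def handle_active(item): print("processing", item)
              def handle_closed(item): print("archiving", item)

              HANDLERS = {
                  Status.PENDING: handle_pending,
                  Status.ACTIVE: handle_active,
                  Status.CLOSED: handle_closed,
              }

          Tedious to type, trivial to verify, which is exactly the autocomplete sweet spot.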

          Anything more complex though and I spend more time debugging than I saved. It’s hallucinated believable API calls way too often and wasted too much of my time.

          • hoshikarakitaridia@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            1 day ago

            Yeah, I can see the API call shenanigans. I’m using Supermaven for code and it’s pretty good tbh; it gets me 30% of the way or something. But API calls are a no-go; it almost never gets them right, because I’m pretty sure it’s very hard for AI to learn the differences between API endpoints.

  • bokherif@lemmy.world
    link
    fedilink
    arrow-up
    19
    ·
    1 day ago

    AI is just another reason for layoffs for companies that are underperforming. It’s more of a buzzword to sell the company to investors. I haven’t seen people actually use AI anywhere in my large ass corp yet.

    • LifeInMultipleChoice@lemmy.world
      link
      fedilink
      arrow-up
      6
      ·
      1 day ago

      I called Roku support for a TV that wasn’t working, and 90% of it was an LLM.

      All the basic troubleshooting, including factory resetting the device and such, seemed to be covered, and then they would forward you on to the manufacturer if it wasn’t fixed, because at that point they assume it is likely a hardware issue (backlight or LCD) and they want to get you to someone who can confirm that and sell you a replacement, I’m sure.

  • curiousaur@reddthat.com
    link
    fedilink
    arrow-up
    20
    arrow-down
    3
    ·
    1 day ago

    Yeah, I use it daily for coding. It’s a force multiplier. It basically makes me 2 - 3x more effective. My company laid off all our junior engineers and is not hiring juniors any longer.

    • some_guy@lemmy.sdf.org
      link
      fedilink
      arrow-up
      19
      ·
      edit-2
      1 day ago

      That certainly won’t come back to haunt them in 10 years. /s

      Very shortsighted, but that’s the market we live in. The people making those decisions know they’ll exit before this catches up with the company and leave someone else holding the bag.

    • VitoRobles@lemmy.today
      link
      fedilink
      English
      arrow-up
      8
      ·
      edit-2
      24 hours ago

      Funny you say this. I’m watching my local coding community say things like “We used to apply to 100+ jobs and get an interview. Now it’s like 300+ jobs.”

    It’s a serious change.

    • DelightfullyDivisive@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      24 hours ago

      I’d say more like 20% more productive for most developers. Maybe it suits your coding style better than most?

      Most of the time spent developing software isn’t writing code, but understanding the problem you’re trying to solve and translating that into an algorithm. I see more utility in generating tests, since a lot of developers don’t have good testing skills.

      • orgrinrt@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        15 hours ago

        That 20% is just way too optimistic for anything serious enough that it would normally prompt the hiring of software engineers.

        If the project currently requires human developers as paid employees, it will continue to require that. So in introducing today’s AI, you either pay for the employees plus the language model expenses, or you pay reduced employee expenses plus the language model expenses, and then have to figure out how to fund a complete, unavoidable refactor/rewrite down the line, and how to adapt the business model back to employing the original number of engineers on top of that lump sum.

        If the project never was going to employ anyone, then yeah, using a language model can be more productive. It’s never going to require the amount of stability and cohesiveness a serious application doing serious things would require.

        Otherwise, it’s just going to add work and require effort in an amount of multiples that scales with the complexity and seriousness of the application.

        And while it does this, it consumes ridiculously more energy and resources than a human would, especially resources that are not sustainable and that humans do not generally require in such immense amounts.

        It’s going to be a net negative for a good while. If we ever survive the burning of our resources with these current models, maybe we get to something actually serious and usable, but I doubt those two can ever work together.

      • curiousaur@reddthat.com
        link
        fedilink
        arrow-up
        1
        arrow-down
        1
        ·
        24 hours ago

        I don’t know what tools you’re using, but translating the problem into an algorithm is exactly what the AI is very good at.

        I basically only architect stuff now, then fine tune the AI prompts and results.

  • FartsWithAnAccent@fedia.io
    link
    fedilink
    arrow-up
    31
    ·
    1 day ago

    No, not even close. It’s far too unreliable. Without someone who knows what they’re doing to vet the questionable results, AI is a disaster waiting to happen. Never mind that it cannot go fix a computer or server or any other physical issue.

    • dan1101@lemm.ee
      link
      fedilink
      arrow-up
      12
      ·
      1 day ago

      Replacing workers with AI is a dream of management, but it’s not really AI; it’s just a general search engine with a fairly impressive natural-language interface.

  • HubertManne@moist.catsweat.com
    link
    fedilink
    arrow-up
    8
    ·
    1 day ago

    It has potential to increase quality, but not to take over the job. Coders already had various addons that can complete a line, suggest variables and such. I found the auto-commenting great. Not that it did a great job, but it’s one of those things where without it I’m not doing enough commenting, yet when it auto-comments I’m inclined to correct it. I suppose at some point in the future the tech people could be writing better tasks and user stories, then commenting to have AI update the code output, or just going in and correcting it. Maybe then comments would indicate AI code vs. user-intervened code or such. Ultimately though, until it can plan the code, it’s only going to be a useful tool and can’t really take over. I’ll tell ya, if AI could write code from an initiative the C-suite wrote, then we’re at the singularity.

    • orgrinrt@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      15 hours ago

      It also has potential to decrease the quality.

      I think the main pivot point is whether it replaces human engineers or complements them.

      I’ve seen people with no software engineering experience or education, or even no programming experience at all in any form, create working apps with AI.

      I’ve also seen such code in multiple instances and have to wonder how any of it makes sense at all to anyone. There are no best practices seen, just a confusing set of barely working disconnected snippets of code that very rudimentarily work together to do what the creator wanted in a very approximate, inefficient and unpredictable way, while also lacking any benefits of such disconnect such as encapsulation or any real domain-separated design.

      Extending and maintaining that app? Absolutely not possible without either a massive refactoring resembling a complete rewrite, or, you know, just an honest rewrite.

      The problem is, someone who doesn’t know what they are doing, doesn’t know what to ask the language model to do. And the model is happy to just provide what is asked of it.

      Even when provided proper, informed prompts, the inability to use the entire codebase as context causes a lot of manual intervention and requires bespoke design in the codebase to work around it.

      It absolutely takes many times more work to make it all function for ML in a proper, actually maintainable and workable way, and even then it requires constant intervention, to the point that you end up doing the work you’d do manually, but with at least triple the effort.

      It can enhance some aspects, of which one worth a special mention is actually the commenting and automatic, basic documentation skeletons to work up from, but it certainly will not, for some while, replace anyone. Not unless the app really only has to work, maybe, sometimes, and stay as-is without any augmentations, be they maintenance or extending or whatever.

      But yeah, it sort of makes sense. It’s a language model, not a logical model: not one that is capable of understanding a given context, of getting even close to enough context, or of maintaining or even properly understanding the architecture it works with.

      It can mimic code, as it is a language model after all. It can get the syntax right, sure, and sometimes, in small applications, it works well enough. It can be useful to those who would not employ engineers in the first place, and it can be amazing for those cases, really, good for them! But anything that requires understanding of anything? Yeah, that’s going to do nothing other than confuse and trip everyone in the long run, requiring multiples of the work compared to just doing it with actual people who can actually understand shit and retain tens of years’ worth of accumulated, extremely complex and large context, and the experience of applying it in practice.

      But, again, for documentation, I think it is a perfect fit. It needs not any deeper context, and it can put into general language what it sees as code, and sometimes it even gets it right and requires minimal input from people.

      So, it can increase quality in some sense, but we have to be really conscious of what that sense is, and how limited its usefulness ultimately is.

      Maybe in due time, we’ll get there. But we are not even close to anything groundbreaking yet in this realm.

      I don’t think we’ll ever get there, because we are very likely going to overextend our usage of natural resources and burn out the planet before we get there. Unless a miracle happens, such as stable fusion energy or something as yet inconceivable.

      • HubertManne@moist.catsweat.com
        link
        fedilink
        arrow-up
        1
        ·
        6 hours ago

        I find that this is a big difference between the LLMs: some you can challenge the answer and sorta get an update where they take into account what you said, closer to a conversation and thus collaboration. Others, though, seem to treat it like a new query and don’t take into account what’s been said, or just don’t do so well. My thought is it could be a replacement for pair programming, but not many places were using that anyway.

  • remon@ani.social
    link
    fedilink
    arrow-up
    56
    ·
    edit-2
    2 days ago

    Nope. In fact, it’s actually generating more work for me, because managers are committing their shitty generated code and then we have to debug and refactor it for production. It would actually save time if they just made a ticket and let us write it traditionally.

    But as long as they’re wasting their own time, I’m not complaining.

      • remon@ani.social
        link
        fedilink
        arrow-up
        26
        arrow-down
        1
        ·
        edit-2
        2 days ago

        I actually quite enjoyed it. He called me on the weekend the other day because he couldn’t get his code to run (he had tried for multiple hours). It took me about ten seconds to tell him he was missing two brackets; I didn’t even need him to share his screen, it was such an obvious amateur mistake.

        Anyway, wrote down 15 minutes (smallest unit) of weekend overtime for a 1 minute call.

  • tal@lemmy.today
    link
    fedilink
    English
    arrow-up
    37
    ·
    edit-2
    1 day ago

    “Tech workers” is pretty broad.

    Tech Support

    There are support chatbots that exist today that act as a support feature for people who want to ask English-language questions rather than search for answers. Those were around even before LLMs and could work on even simpler principles. Having tier-1 support workers work off a flowchart is a thing, and you can definitely make a computer do that even without any learning capability at all. So they definitely can fill some amount of the role. I don’t know how far that will go, though. I think that there are probably going to be fundamental problems with novel or customer-specific issues, because a model just won’t have been trained on them. I think that it’s going to have a hard time synthesizing an answer from answers to multiple unrelated problems that it might have in its training corpus. So I’d say, yeah, to some degree, and we’ve successfully used expert systems and other forms of machine learning in the past to automate some basic stuff here. I don’t think that this is going to be able to do the field as a whole.
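
    To make the flowchart point concrete: tier-1 support off a flowchart is just a decision tree. A minimal sketch, with made-up questions and canned resolutions, no learning capability involved:

        # Toy tier-1 support flowchart: each node is a yes/no question that
        # routes to another node or to a canned resolution. No ML involved.
        TREE = {
            "start": ("Is the device powered on?", "network", "power"),
            "network": ("Is it connected to the network?", "escalate", "wifi"),
        }
        RESOLUTIONS = {
            "power": "Plug the device in and hold the power button for 5 seconds.",
            "wifi": "Reconnect to Wi-Fi, then reboot the router.",
            "escalate": "Open a tier-2 ticket.",
        }

        def run(node="start"):
            while node in TREE:
                question, if_yes, if_no = TREE[node]
                answer = input(question + " [y/n] ").strip().lower()
                node = if_yes if answer == "y" else if_no
            print(RESOLUTIONS[node])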

    Writing software

    Can existing LLM systems write software? No. I don’t think that they are an effective tool to pump out code. I also don’t think that the current, “shallow” understanding that they have is amenable to doing so.

    I think that the thing LLMs work well at is producing stuff that is different, but appears to a human to be similar to other content. There are a variety of uses where that works, to varying degrees, for content consumed by humans.

    But humans deal well with errors in what we see. The kinds of errors in AI-generated images aren’t a big issue for us – they just need to cue up our memories of things in our head. Programming languages are not very amenable to that. And I don’t think that there’s a very effective way to lower that rate.

    I think that it might be possible to make use of an LLM-driven “warning” system when writing software; I’m not sure if someone has done something like that. Think of something that works the way a grammar checker does for natural language. Having a higher error rate is acceptable there. That might reduce the amount of labor required to write code, though I don’t think that it’ll replace it.
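
    A sketch of the shape such a warning pass could take, with plain hardcoded heuristics standing in for a trained model (the point is only that warnings, like a grammar checker’s, can tolerate false positives):

        import re

        # Toy "grammar checker" for Python code: flag lines that *might* be
        # mistakes. False positives are acceptable; a human decides.
        CHECKS = [
            (re.compile(r"==\s*None"), "comparison to None (did you mean 'is None'?)"),
            (re.compile(r"except\s*:\s*pass"), "exception silently swallowed"),
            (re.compile(r"def \w+\(.*=\s*(\[\]|\{\})"), "mutable default argument"),
        ]

        def warn(source: str) -> None:
            for lineno, line in enumerate(source.splitlines(), 1):
                for pattern, message in CHECKS:
                    if pattern.search(line):
                        print(f"line {lineno}: possible issue: {message}")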

    Maybe it’s possible to look for common security errors to flag for a human by training a model to recognize those.

    I also think that software development is probably one of the more-heavily-automated fields out there because, well, people who write software make systems to do things over and over. High-level programming languages rather than writing assembly, software libraries, revision control…all that was written to automate away parts of tasks. I think that in general, a lot of the low-hanging fruit has been taken.

    Does that mean that I think that software cannot be written by AI? No. I am sure that AI can write software. But I don’t think that the AI systems that we have today, or systems that are slightly tweaked, or systems that just have a larger model, or something along those lines, are going to be what takes over software development. I also think that the kind of hurdles that we’d need to clear to really fully write software from an AI require us to really get near an AI that can do anything that a human can do. I think that we will eventually get there, and when we get there, we’ll see human labor in general be automated. But I don’t think that OpenAI or Microsoft are a year away from that.

    System and network administration

    Again, I’m skeptical that interacting with computers is where LLMs are going to be the most-effective. Computers just aren’t that tolerant of errors. Most of the things that I can think of that you could use an AI to do, like automated configuration management or something, already have some form of automated tools in that role.

    Also, I think that obtaining training data for this corpus is going to be a pain. That is, I don’t think that sysadmins are going to generally be okay with you logging what they’re doing to try to build a training corpus, because in many cases, there’s potential for leaks of sensitive information.

    And a lot of data in that training corpus is not going to be very timeless. Like, watching someone troubleshoot a problem with a particular network card…I’m not sure how relevant that’s going to be for later hardware.

    Quality Assurance

    This involves too many different things for me to make a guess. I think that there are maybe some tasks that some QA people do today that an LLM could do. Instead of using a fuzzer to throw input in for testing, maybe have an AI predict what a human would do.
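
    Roughly the contrast I have in mind, with made-up UI actions and weights standing in for whatever a model of real users would learn:

        import random

        ACTIONS = ["click_submit", "click_cancel", "type_text", "press_back"]

        def fuzzer_step() -> str:
            # Classic fuzzing: every action equally likely
            return random.choice(ACTIONS)

        def humanlike_step() -> str:
            # Model stand-in: sample from a distribution resembling real users
            weights = [0.55, 0.1, 0.3, 0.05]
            return random.choices(ACTIONS, weights=weights, k=1)[0]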

    Maybe it’s possible to build some kind of model mapping instructions to operations with a mouse pointer on a screen and then do something that could take English-language instructions to try to generate actions on that screen.

    But I’ve also had QA people do one-off checks, or things that aren’t done at mass scale, and those probably just aren’t all that sensible to automate, AI or no. I’ve had them do tasks in the real world (“can you go open up the machine seeing failures and check what the label on that chip on the machine that’s getting problems reads, because it’s reporting the same part number in software”). I’ve written test plans for QA to run on things I’ve built, and had them say “this is ambiguous”. My suspicion is that an LLM trained on what information is out there is going to have a hard time, without a deep understanding of a system, saying “this is ambiguous”.

    Overall

    There are other areas. But I think that any answer is probably “to some degree, depending upon what area of tech work, but mostly not, not with the kind of AI systems that exist today or with minor changes to existing systems”.

    I think that a better question than “can this be done with AI” is “how difficult is this job to do with AI”. I mean, I think that eventually, pretty much any job could probably be done by an AI. But I think that some are a lot harder than others. In general, the ones that are more-amenable are, I think, those where one can get a good training corpus – a lot of recorded data showing how to do the task correctly and incorrectly. I think that, at least using current approaches, tasks that are somewhat-tolerant of errors are better. For any form of automation, AI or no, tasks that need to be done repeatedly many times over are more-amenable to automation. Using current approaches, problems that can be solved by combining multiple things from a training corpus in simple ways, without a deep understanding, not needing context about the surrounding world or such, are more amenable to being done by AI.

    • Kraiden@kbin.earth
      link
      fedilink
      arrow-up
      3
      ·
      1 day ago

      re: The warning/grammar-checking system.

      What you’re describing is called a linter, and they’ve existed for ages.

      The only way I can really think of to improve them would be to give them a full understanding of your codebase as a whole, which would require a deeper understanding than current gen AI is capable of. There might be some marginal improvements possible with current gen, but it’s not going to be groundbreaking.

      What I have found AI very useful for is basic repetitive stuff that isn’t easily automated in other ways or that I simply can’t be bothered to write again. eg: “Given this data model, generate a validated CRUD form” or “write a bash script that renames all the files in a folder to follow this pattern”

      You still need to check what it produces though because it will happily hallucinate parameters that don’t exist, or entire validation libraries that don’t exist, but it’s usually close enough to be used as a starting point.
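
      The rename case, for example, usually comes back close enough to use. A sketch of the sort of output I mean, in Python rather than bash for illustration, with a made-up naming pattern:

          from pathlib import Path

          # Rename every .txt file in ./reports to report_001.txt, report_002.txt, ...
          folder = Path("./reports")
          for i, path in enumerate(sorted(folder.glob("*.txt")), start=1):
              path.rename(folder / f"report_{i:03d}.txt")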

      • tal@lemmy.today
        link
        fedilink
        English
        arrow-up
        1
        ·
        21 hours ago

        What you’re describing is called a linter, and they’ve existed for ages.

        Yup, and I’ve used them, but they’ve had hardcoded rules, not models trained on code.

    • Appoxo@lemmy.dbzer0.com
      link
      fedilink
      arrow-up
      6
      ·
      1 day ago

      What I’d like is to plug in the manual and FAQ of some software or whatever and be able to ask specific questions about the setup/configuration.

      Now who is going to write the documentation ;)

      • snooggums@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        1 day ago

        Obviously AI will write the documentation that is read by the AI, which will inform another AI to do the work, and a fourth AI does the testing, so that an AI farm can use the software to buy stocks or something.

    • SlopppyEngineer@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      1 day ago

      I suspect that for a bunch of projects, AI is going to make programming itself obsolete. If it comes pre-trained to use a number of libraries, protocols and databases, then giving the thing a bunch of specifications and scenarios and letting it do the actual work of doing bookkeeping or whatever becomes possible. Most managers would jump at the idea of throwing extra hardware at a problem to run AI locally if it means shipping in half the time. As long as the problem to solve is generic enough and not too big. And those limits will go up quickly.

    • Modern_medicine_isnt@lemmy.worldOP
      link
      fedilink
      arrow-up
      3
      ·
      1 day ago

      Great write-up. A few things caught my eye. You mentioned AI checking code in real time as it is written. IDEs do a pretty good job of that already. To do much better, it would have to know what you want to do, and that seems to be a barrier to how AI is developed today. It doesn’t “understand” why.

      Now QA is interesting. I wonder if anyone has built a model, based entirely on clicks, that can predict where a user is going to click. That would be very interesting. It would work really well for testing functionality that is already common on existing sites. Most webapps are largely made up of things that have already been done… date choosers, question submitters, and such. Like, how many apps out there are for scheduling an appointment? Tons. And so many apps (even mobile games) are just the same thing in a custom facade. In this case I don’t think it would replace QA much, as the places writing that stuff don’t test much anyway. But it could speed up developers by reducing the number of customer-reported issues in code they wrote months ago.

  • beliquititious@lemmy.blahaj.zone
    link
    fedilink
    arrow-up
    15
    ·
    1 day ago

    I went to Taco Bell the other day and they had an AI taking orders in the drive-thru, but it seemed like they had the same number of workers.

    They also weren’t happy I tried to mess with the AI.

  • dukeofdummies@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    1 day ago

    Had a new hire try to do all his automation programming in Python with an AI. It was horrifying.

    Lists and lists and lists of if-else statements that caught if a button errored, but never caught if it did the right thing. 90% of their bug reports were directly due to their own code. Trivially provable.

    Work keeps trying to tell us to use more AI, but refuses to say whether the training data includes company emails. If it does, then a buttload of unlabeled, non-public data is getting fed into it. So it’s only a matter of time until a “fun fact” from the AI turns into a nightmare.

    Most of our stuff is in an obscure field with outdated code, so any coding assistance is not really that impressive.