What do you think?

I think that as AI takes over many tasks, we need to rethink how we frame the future of society. Reframing Universal Basic Income as Automation Compensation means presenting the policy as a way to make up for jobs and income lost to automation and AI. Instead of viewing UBI as a general welfare payment, it becomes compensation paid to everyone for the value automation creates, supporting those whose work is replaced by machines and helping everyone share in the productivity gains. Especially in the US, the average person doesn’t like the idea of someone getting something they’re personally not receiving. So framing it as compensation that everyone receives regardless of employment status is, I think, the only feasible way forward.

  • Echo Dot@feddit.uk · 1 day ago

    Especially in the US, the average person doesn’t like the idea of someone getting something they’re personally not receiving. So framing it as compensation that everyone receives regardless of employment status is, I think, the only feasible way forward.

    Universal basic income is supposed to be irrespective of employment status already. That’s what the “Universal” bit means.

    Any work you do is compensated on top of UBI, which allows for things like working half the week and using UBI to top up your finances. A company could employ you two and a half days a week and another person the other two and a half days, with each of you taking home a full week’s worth of pay thanks to UBI. The company doesn’t have to pay out any more, you get the same amount of money as before, and you’ve doubled the number of jobs. It’s a win-win-win situation.
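
    A minimal sketch of that arithmetic, using invented figures (the wage and UBI amounts below are assumptions for illustration, not from any actual proposal):

```python
# Hypothetical illustration of the half-week job-splitting arithmetic above.
# The wage and UBI figures are invented for the example.
full_time_wage = 40_000               # one full-time annual salary
ubi = 20_000                          # assumed annual UBI per person

half_time_wage = full_time_wage / 2   # each worker does half the week
worker_income = half_time_wage + ubi  # half a wage, topped up by UBI
employer_cost = 2 * half_time_wage    # total wage bill for the two workers

print(worker_income)   # 40000.0, same as the old full-time salary
print(employer_cost)   # 40000.0, the employer pays no more than before
```

    Under these invented numbers, the only change is that two people hold jobs where one did before.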

    The problem is framing UBI as a welfare benefit, when really that’s not how it’s supposed to be understood.

  • Rhynoplaz@lemmy.world · 2 days ago

    You might be on the right track. We’ve been selling these ideas to the people who already want them, we need to expand the market!

    • jaykrown@lemmy.world (OP) · 2 days ago

      I feel like there’s enough support around this idea to form a focused community. How would it look in reality? That needs to be discussed, and work needs to be done to find ways to implement it with realistic expectations. Consider a real-world example of automation, like an automobile factory: all the people who would have done that work may now have no job, and thus no money to afford the automobiles the factory produces. What is the lost opportunity?

      • HurricaneLiz@lemmy.world · 2 days ago

        I was just asking Gemini yesterday what research is going on into ways to pay the owners of the websites it scrapes data from, and it said it’s in the works. There will be a lot more product recommendations, though, in lieu of the way ads are currently structured.

        If this concept can then be expanded to encompass everyone whose data was stolen for training models, that’d be UBI.

  • chicken@lemmy.dbzer0.com · 2 days ago

    It’s hard enough just to get people to stop calling non-universal, means-tested welfare payments UBI, even though it’s only three words, one of them is “Universal”, and that’s exactly what it’s supposed to mean: everyone gets it.

    Honestly I think the best option would be to frame it as massive wealth redistribution, from corporations and the wealthy, to everyone else. Might seem counterintuitive, since to a lot of people that would sound kind of bad, but without being founded in such redistribution there’s no possible way it could actually work and be sustainable. If the idea of UBI gets any traction, I predict the main threat to its success will be “have your cake and eat it too” implementation proposals that can’t actually work because they don’t redistribute wealth, that people will eat up because they don’t understand or believe in economics. So make wealth transfer a core part of the messaging to head that off, and fight the entrenched interests directly.

  • pelespirit@sh.itjust.works · 2 days ago

    What I’ve seen as the most talked-about part of this is: how are we going to pay for it? Of course, a wealth tax has to be a huge part of it. We’ve seen the following work, so it’s not hard to understand; the billionaires just don’t want it:

    • Social Security, but with only the companies footing the bill
    • Alaska’s dividends: the same concept, but of course with a lot more money
    • Medicare, Medicaid, and disability benefits
    • Botzo@lemmy.world · 2 days ago

      Just off the top of my head:

      • A permanent tax on corporations that lay off people, until they rehire to the same level.
      • An excise tax on AI use by businesses.

      Both of these would of course get me labeled an antichrist by Peter Thiel. And since AI is propping up the world economy right now, they have zero chance of happening.

      • AnarchistArtificer@slrpnk.net · 2 days ago (edited)

        “Permanent tax on corporations that lay off people until they rehire to the same level.”

        This is similar to what the historical Luddites were arguing for. (Probably worth clarifying that I say this as a good thing. The Luddites failed because they were working at a time when unions were literally illegal; the political conditions were just too stacked against them. However, there are a lot of useful things we can learn from history, and this is one of them.)

        Edit: formatting

      • shalafi@lemmy.world · 2 days ago

        Fine, then no company will ever expand their staff. Can’t risk a downturn a ways down the road.

        You’ve invented a new way to increase unemployment. :)

        • pelespirit@sh.itjust.works · 2 days ago

          Dude, every company that thought AI could take over a job tried it. Do you think they’re trying to keep employees?

        • Botzo@lemmy.world · 2 days ago

          Companies lay people off when the stock doesn’t grow the right way, even when they’re highly profitable.

          The Jack Welch playbook has fucked the concept of business success so hard we can’t even recognize what a huge pile of shit it has become. It needs a reset.

    • shalafi@lemmy.world · 2 days ago

      Not enough available to tax to pull this off. BUT, when you factor in dropping all other social services, now we’re a lot closer.

      • pelespirit@sh.itjust.works · 2 days ago (edited)

        So, this would be based on crypto? That’s what I understand, like the stablecoin. I have many questions that they didn’t really cover.

        It seems that the way the dividends come about is by loaning out money, your $3 becomes $97. Is that correct? If so:

        • Who is handling these transactions and overhead?
        • What if people don’t pay back the loan?
        • What if that money is stolen? Crypto can be easily corrupted and traced.

        There’s more questions. I’m not trying to shoot it down, I just want to understand.

        Edit: is it still tied to SOFR?

        However, it may still be vulnerable to manipulation. Banks can borrow and lend at biased rates in the wholesale funding market, which can lead them to profit in the much larger market for benchmark-indexed contracts.[8] It was therefore suggested that the lending costs of individual banks be published to increase transparency and deter manipulation.[8]

        The Bank for International Settlements, which serves as the bank for central banks, said in March 2019 that a one-size-fits-all alternative may be neither feasible nor desirable. Although SOFR solves the rigging problem, it does not help participants gauge how stressed global funding markets are. That means SOFR is likely to coexist with something else.[13]

        https://en.wikipedia.org/wiki/SOFR

        • kibiz0r@midwest.social · 2 days ago

          Not crypto. Just digital. So, centralized, subject to anti-fraud regulation, reversible transactions, etc.

          Not loaned out. Explicitly marked as not-loanable. Which would be foolish in today’s market, because you’re losing out on a dividend. Except… the bank actually keeps most of the benefit from your deposit being loanable normally. This way, you get the benefit instead.

          Basically, it allows depositors to compete against the banks. So the banks can’t take you for granted, because you actually have an alternative.

  • PositiveNoise@piefed.social · 2 days ago (edited)

    I think that if people get to have Universal Basic Income, and society can be arranged to provide it without causing big problems to society, then it doesn’t need to be tied to Automation in any way, and instead can simply be viewed as a core benefit of being a member of society. That seems like a more elegant approach.

    People would not want to be told ‘oh, it seems like we are going to scale back automation some, so everyone is going to only get 50% of the UBI they have been receiving previously’.

    • jaykrown@lemmy.world (OP) · 2 days ago

      The idea isn’t the problem, I think it’s the framing. The word “income” is charged, and it’s what people associate with work. The word “compensation” is more fitting because we are being compensated for the fact that work is much more scarce due to increasing automation. It also implies that we are owed, rather than receiving an “income” we didn’t directly work for. No one is going to scale back automation, that’s never how it’s worked.

  • coolman@lemmy.world · 2 days ago

    I fully agree with this, but I’m of the belief that in order to fund it, we need to tax the “labor” that companies are saving with AI. If a company makes its profit normally, with human workers, the government gets paid twice: once on the income of the company and once on the income of the people. But if AI takes half of that away, the country is missing out on trillions of tax dollars.

    So what’s the plan? Require all companies to disclose their electric bills and what they used that power for. If it’s AI, tax them at a rate dependent on the size of the company and the size of the AI portion. This has the additional benefit of incentivizing companies to simply hire people again.
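
    A rough sketch of how such a levy might be computed. The brackets, rates, and the function name here are invented assumptions for illustration, not part of any real proposal:

```python
# Hypothetical AI excise levy, scaled by company size and by the share
# of the company's electricity bill attributable to AI workloads.
# All brackets and rates are invented for illustration.
def ai_excise_tax(annual_revenue: float, ai_power_share: float) -> float:
    """Return the tax owed, given yearly revenue in dollars and the
    fraction (0.0 to 1.0) of electric power used for AI."""
    if annual_revenue > 1_000_000_000:    # larger firms pay a higher base rate
        base_rate = 0.05
    elif annual_revenue > 100_000_000:
        base_rate = 0.03
    else:
        base_rate = 0.01
    return annual_revenue * base_rate * ai_power_share

# A $2B company running half its power on AI would owe $50M under these rates.
print(ai_excise_tax(2_000_000_000, 0.5))
```

    Because the levy scales with the AI share of power use, cutting AI spending (or rehiring humans) directly cuts the tax bill, which is the incentive described above.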

    This will never happen but I can dream.

  • AnarchistArtificer@slrpnk.net · 2 days ago

    This sounds interesting. It reminds me of past workers’ movements, namely the Luddites and the UK miners’ strike. If you want to learn more about the Luddites and what they were asking for, the journalist Brian Merchant has a good book named “Blood in the Machine”.

    Closer to my heart and my lived experience is the miners’ strike. I wasn’t born at the time, but I grew up in what I semi-affectionately call a “post-industrial shit hole”. A friend once expressed curiosity about what an alternative to shutting the mines would have been, especially in light of our increasing knowledge of the need to move away from fossil fuels. A big problem with what happened with the mines is that there were entire communities effectively based around them.

    These communities often did have other sources of industry and commerce, but with the mines gone, it fucked everything up. There weren’t enough opportunities for people afterwards, especially because miners’ skills and experience couldn’t easily translate to other skilled work. Even if a heckton of money had been provided to “re-skill” out-of-work miners, that wouldn’t have been enough to absorb the economic calamity caused by abruptly closing a mine, precisely because of how locally concentrated the effect would be. If done all at once, for instance, you’d find a severe shortage of teachers and trainers, who would then find themselves in a similar position of needing to either move elsewhere to find work, or train in a different field. The key was that there needed to be a transition plan that acknowledged the human and economic realities of closing the mines.

    Many argued, even at the time, that a gradual transition plan that actually cared about the communities affected would have led to much greater prosperity for all. Having grown up amongst the festering wounds of the miners’ strike, I feel this to be true. Up in the North of England, there are many who feel like they have been forgotten or discarded by the system. That causes people a lot of pain; I think it’s typical for people to want their lives to be useful in some way, but the Northern, working-class manifestation of this instinct is particularly distinct.

    Linking this back to your question, I think that framing it as compensation could help, but I would expect opposition to remain as long as people don’t feel like they have ways to be useful. A surprising contingent of people who dislike social security payments that involve “getting something for nothing” are people who themselves would be beneficiaries of such payments. I link this perspective to listlessness I describe in ex-mining communities. Whilst the vast majority of us are chronically overworked (including those who may be suffering from underemployment due to automation), most people do actually want to work. Humans are social creatures, and our capacities are incredibly versatile, so it’s only natural for us to want to labour towards some greater good. I think that any successful implementation of universal basic income would require that we speak to this desire in people, and help to build a sense that having their basic living costs accounted for is an opportunity for them to do something meaningful with their time.

    Voluntary work is the straightforward answer to this, and indeed, some of the most fulfilled people I know are those who can afford to work very little (or not at all), but are able to spend their time on things they care about. However, I see so many people not recognise what they’re doing as meaningful labour. For example, I go to a philosophy discussion group where there is one main person who liaises with the venue, collects the small fee every week (£3 per person), updates the online description for the event and keeps track of who is running each session, recruiting volunteers as needed. He doesn’t recognise the work he does as being that much work, and certainly doesn’t feel it’s enough to warrant the word “labour”. “It’s just something I do to help”; “You’re making it sound like something larger than it is — someone has to do it”. I found myself (affectionately) frustrated during this conversation because it highlights something I see everywhere: how capitalism encourages us to devalue our own labour, especially reproductive labour and other socially valuable labour. There are insufficient opportunities for meaningful contribution within the voluntary sector as it exists now, but so much of what people could and would be doing more of exists outside of that sector.

    We need a cultural shift in how we think about work. However, it’s harder to facilitate that cultural shift in how we view labour if most people are forced to only see their labour in terms of wages and salaries. On the other hand, people are more likely to resist policies like UBI if they feel it presents a threat to their work-centred identity and their ability to conceive of their existence as valuable. It’s a tricky chicken-or-egg problem. Overall, this is why I think your framing could be useful, but is not likely to be sufficient to change people’s minds. I think that UBI or similar certainly is possible, but it’s hard to imagine it being implemented in our current context due to how radical it is. Far be it from me to shy away from radical choices, but I think that it’s necessary to think of intermediary steps towards cultivating class consciousness and allowing people to conceive of a world where their intrinsic value is decoupled from their output under capitalism. For instance, I can’t fathom how universal basic income would work in a US without universal healthcare. It boggles my mind how badly health insurance acts to reinforce coercive labour relations. The best thing we can do to improve people’s opinion of universal basic income is to improve their material conditions.

    Finally, on AI. I think my biggest disagreement with Automation Compensation as a framing device for UBI is that it inadvertently falls into the trap of “tech critihype”, which the linked author describes as “[inverting] boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks.” Critihype may appear to criticise something, but actually ends up feeding the hype cycle, and in turn, is nourished by it. The problem with AI isn’t that it is going to end up replacing a significant chunk of the workforce, but rather that penny-pinching managers can be convinced that AI is (or will be) able to do that.

    I like the way that Brian Merchant describes the real problem of AI on his blog:

    “[…] the real AI jobs crisis is that the drumbeat, marketing, and pop culture of ‘powerful AI’ encourages and permits management to replace or degrade jobs they might not otherwise have. More important than the technological change, perhaps, is the change in a social permission structure.”

    This critical approach is extra important when we consider that the jobs and fields most heavily being affected by AI are in creative fields. We’ve probably all seen memes that say “I want an AI to automate doing the dishes so that I can do art, not automate doing art so I can spend more time doing the dishes”. Universal Basic Income would be limited in alleviating social angst unless we can disrupt the pervasive devaluation of human life and effort that the AI hype machine is powering.

    Though I have ended up disagreeing with your suggestion, thanks for posing this question. It’s an interesting one to ponder, and I certainly didn’t expect to write this much when I started. I hope you find my response equally interesting.

    • jaykrown@lemmy.world (OP) · 2 days ago

      The problem with AI isn’t that it is going to end up replacing a significant chunk of the workforce, but rather that penny-pinching managers can be convinced that AI is (or will be) able to do that.

      This to me is such an interesting perspective, which I’ve seen a lot of people write over the past couple of months. AI will absolutely replace a significant chunk of the workforce; there are many jobs that are repetitive and very close to being automated. Any type of manual data entry or customer service is at serious risk. I strongly suggest you do some research into what the most powerful models are capable of before forming an opinion.

      If you want some examples:

      “ElevenLabs’ latest 2025 update delivers true text-voice multimodal conversational agents, real-time adaptive speech, support for 73 languages, deep emotional range, and native “multimodality” for both text and speech inputs, with a roadmap for further cross-modal features. Google Gemini’s most recent update, released November 2025, introduced Gemini 2.5 Pro and Flash, which feature real-time collaborative “Live” mode, massive context handling (1M tokens), improved multimodal capabilities (native text, image, audio, video reasoning), and a “Deep Think” mode for advanced reasoning—cementing Gemini 2.5 as a best-in-class AI for both data entry and complex support.”

      Like you want a real time agent that replies to your customer service questions regarding a product in authentic sounding speech? I can point you to the tools to build it in a couple weeks.

      • AnarchistArtificer@slrpnk.net · 1 day ago

        You’re literally quoting marketing materials to me. For what it’s worth, I’ve already done more than enough research to understand where the technology is at; I dove deep into learning about machine learning in 2020, when AlphaFold 2 was taking the structural biology world by storm — I wanted to understand how it had done what it had, which started a long journey of accidentally becoming a machine learning expert (at least, compared to other biochemists and laypeople).

        That knowledge informs the view in my original comment. I am (or at least, was) incredibly excited about the possibilities, and I do find much of this extremely cool. However, what has dulled my hype is how AI is being indiscriminately shoved into every orifice of society when the technology simply isn’t mature enough for that yet. Will there be some fields that experience blazing productivity gains? Certainly. But I fear any gains will be more than negated through losses in sectors where AI should not be deployed, or it should be applied more wisely.

        Fundamentally, when considering its wider effect on society, I simply can’t trust the technology — because in the vast majority of cases where it’s being pushed, there’s a thoroughly distrustful corporation behind it. What’s more, there’s increasing evidence that this just simply isn’t scalable. When you look at the actual money behind it, it becomes clear that the reason why it’s being pushed as a magical universal multi tool is because the companies making these models can’t make them profitable, but if they can drum up enough investor hype, they can keep kicking that can down the road. And you’re doing their work for them — you’re literally quoting advertising materials for me; I hope you’re at least getting paid for it.

        I remain convinced that the models that are most prominent today are not going to be what causes mass automation on the scale you’re suggesting. They will, no doubt, continue to improve — there are so many angles of attack on that front: Mixture of Experts (MoE) and model distillation to reduce model size (this is what made DeepSeek so effective); Retrieval-Augmented Generation (RAG) to reduce hallucinations and allow output to be fine-tuned on a small scale against a supplementary knowledgebase; reducing the harmful effects of training on synthetic data so you can do more of it before model collapse happens. There are countless ways they can incrementally improve things, but it’s just not enough to overcome the hard limits on these kinds of models.

        My biggest concern, as a scientist, is that what additional progress there could be in this field is being hampered by the excessive evangelising of AI by investors and other monied interests. For example, if a company wanted to make a bot for low-risk customer service or an internal knowledgebase using RAG, the model would need access to high-quality documentation to draw from — and speaking as someone who has contributed a few times to open-source software documentation, let me tell you that such documentation is, on average, pretty poor quality (and open source is typically better than closed source on this front, which doesn’t bode well). Devaluing human expertise and labour is just shooting ourselves in the foot, because what is there to train on if most of the human writers are sacked?

        Plus there’s the typical old notion around automation leading to loss of low skilled jobs, but the creation of high skilled roles to fix and maintain the “robots”. This isn’t even what’s happening, in my experience. Even people in highly skilled, not-currently-possible-to-automate jobs are being pushed towards AI pipelines that are systematically deskilling them; we have skilled computer scientists and data scientists who are unable to understand what goes wrong when one of these systems fucks up, because all the biggest models are just closed boxes, and “troubleshooting” means acting like an entry level IT technician and just trying variations of turning it off and on again. It’s not reasonable to expect these systems to be perfect — after all, humans aren’t perfect. However, if we are relying on systems that tend to make errors that are harder for human oversight to catch, as well as reducing the number of people trying to catch them, then that’s a recipe for trouble.

        Now, I suspect here is where you might say “why bother having humans try to catch the errors when we have multimodal agentic models that are able to do it all”. My answer to that is that it’s a massive security hole. Humans aren’t great at vetting AI output, but we are tremendously good at breaking it. I feel like I read a paper for some ingeniously novel hack of AI every week (using “hack” as a general term for all prompt injection, jailbreak etc. stuff). I return to my earlier point: the technology is not mature enough for such widespread, indiscriminate rollout.

        Finally, we have the problem of legal liability. There’s that old IBM slide that’s repeatedly done the rounds the last few years: “A computer can never be held accountable, therefore a computer must never make a management decision.” Often the reason we need humans to keep an eye on systems is that legal systems demand at least the semblance of accountability, and we don’t have legal frameworks for figuring out what the hell to do when AI or other machine learning systems mess up. It was recently in the news that police officers went to ticket an automated taxi (a Waymo, I think) when it broke traffic laws, and didn’t know what to do when they found it was driverless. Sure, parking fines can be sent to the company; that doesn’t seem too hard to write regulations for. But with human drivers, if you incur a large number of small violations, you typically end up with a larger punishment, such as having your driver’s licence suspended. What would the equivalent escalation even be for a driverless vehicle? It seems that no one knows, and concerns like these are causing regulators to reconsider their rollout.

        Sure, new laws can be passed, but our legislators are often tech-illiterate, so I don’t expect them to easily solve what prominent legal and technology scholars are still grappling with. That process will take time, and the more we see high-profile cases like suicides following chatbot conversations, the more cautious legislators will be. Public distrust of AI is growing, in large part because people feel like it’s being forced on them, and that will just harm the technology in the long run.

        I genuinely am excited still about the nuts and bolts of how all this stuff works. It’s my genuine enthusiasm that I feel situates me well to criticise the technology, because I’m coming from an earnest place of wanting to see humans make cool stuff that improves lives — that’s why I became a scientist, after all. This, however, does not feel like progress. Technology doesn’t exist in a vacuum and if we don’t reckon with the real harms and risks of a new tool, we risk shutting ourselves off to the positive outcomes too.

    • jaykrown@lemmy.world (OP) · 2 days ago

      So framing it as compensation that everyone receives regardless of employment status is, I think, the only feasible way forward.

        • jaykrown@lemmy.world (OP) · 2 days ago

          Instead of viewing UBI as a general welfare payment, it becomes compensation paid to everyone for the value automation creates, supporting those whose work is replaced by machines and helping everyone share in the productivity gains.

          • lolola@lemmy.blahaj.zone · 2 days ago (edited)

            Edit: never mind. That image made me upset, like I’m too stupid to read or something. I don’t want to be in this conversation.