• 7 Posts
  • 1.48K Comments
Joined 2 年前
cake
Cake day: 2023年6月9日

help-circle
  • There are no bad dogs, only bad dog owners. And whilst I’m sympathetic to owners of dogs with eldritch powers, I will absolutely hold them responsible if they own a dog that’s unsuited for their lifestyle and capability. If they weren’t up to the task, they should have gone for an easier to handle breed, like a border collie, or a husky.


  • You’re literally quoting marketing materials to me. For what it’s worth, I’ve already done more than enough research to understand where the technology is at; I dove deep into learning about machine learning in 2020, when AlphaFold 2 was taking the structural biology world by storm — I wanted to understand how it had done what it had, which started a long journey of accidentally becoming a machine learning expert (at least, compared to other biochemists and laypeople).

    That knowledge informs the view in my original comment. I am (or at least, was) incredibly excited about the possibilities, and I do find much of this extremely cool. However, what has dulled my hype is how AI is being indiscriminately shoved into every orifice of society when the technology simply isn’t mature enough for that yet. Will there be some fields that experience blazing productivity gains? Certainly. But I fear any gains will be more than negated through losses in sectors where AI should not be deployed, or it should be applied more wisely.

    Fundamentally, when considering its wider effect on society, I simply can’t trust the technology — because in the vast majority of cases where it’s being pushed, there’s a thoroughly distrustful corporation behind it. What’s more, there’s increasing evidence that this just simply isn’t scalable. When you look at the actual money behind it, it becomes clear that the reason why it’s being pushed as a magical universal multi tool is because the companies making these models can’t make them profitable, but if they can drum up enough investor hype, they can keep kicking that can down the road. And you’re doing their work for them — you’re literally quoting advertising materials for me; I hope you’re at least getting paid for it.

    I remain convinced that the models that are most prominent today are not going to be what causes mass automation on the scale you’re suggesting. They will, no doubt, continue to improve — there’s so many angles of attack on that front: Mixture of Experts (MoE) and model distillation to reduce model size (this is what made DeepSeek so effective); Retrieval Augmented Generation (RAG) to reduce hallucinations and allow for fine-tuning of output based on a small scale based on a supplementary knowledgebase; reducing the harmful effects of training on synthetic data so you can do more of it before model collapse happens — there’s countless ways that they can incrementally improve things, but it’s just not enough to overcome the hard limits on these kinds of models.

    My biggest concern, as a scientist, is that what additional progress there could be in this field is being hampered by the excessive evangelising of AI by investors and other monied interests. For example, if a company wanted to make a bot for low-risk customer service or internal knowledgebase used RAG, this would require the model to have access to high quality documentation to draw from — and speaking as someone who has contributed a few times to open-source software documentation, let me tell you that that documentation is, on average, pretty poor quality (and open source is typically better than closed source for this, which doesn’t bode well). Devaluing of human expertise and labour is just shooting ourselves in the foot because what is there to train on if most of the human writers are sacked.

    Plus there’s the typical old notion around automation leading to loss of low skilled jobs, but the creation of high skilled roles to fix and maintain the “robots”. This isn’t even what’s happening, in my experience. Even people in highly skilled, not-currently-possible-to-automate jobs are being pushed towards AI pipelines that are systematically deskilling them; we have skilled computer scientists and data scientists who are unable to understand what goes wrong when one of these systems fucks up, because all the biggest models are just closed boxes, and “troubleshooting” means acting like an entry level IT technician and just trying variations of turning it off and on again. It’s not reasonable to expect these systems to be perfect — after all, humans aren’t perfect. However, if we are relying on systems that tend to make errors that are harder for human oversight to catch, as well as reducing the number of people trying to catch them, then that’s a recipe for trouble.

    Now, I suspect here is where you might say “why bother having humans try to catch the errors when we have multimodal agentic models that are able to do it all”. My answer to that is that it’s a massive security hole. Humans aren’t great at vetting AI output, but we are tremendously good at breaking it. I feel like I read a paper for some ingeniously novel hack of AI every week (using “hack” as a general term for all prompt injection, jailbreak etc. stuff). I return to my earlier point: the technology is not mature enough for such widespread, indiscriminate rollout.

    Finally, we have the problem of legal liability. There’s that old IBM slide that’s repeatedly done the rounds the last few years that says “A computer can never be held accountable, therefore a computer must never make a management decision.”. Often the reason why we need humans to keep an eye on systems is that legal systems demand at least the semblance of accountability, and we don’t have legal frameworks for figuring out what the hell to do when AI or other machine learning systems mess up. It was recently in the news about police officers going to ticket an automated taxi (a Waymo, I think) when it broke traffic laws, and not knowing what to do when they found it was driverless. Sure, parking fines can be sent to the company, that doesn’t seem too hard to write regulations for, but with human drivers, if you incur a large number of small violations, it’s typical to end up with a larger punishment, such as one’s driver’s licence being suspended. What would even be the equivalent level of higher punishment for driverless vehicles? It seems that no-one knows, and concerns like these are causing regulators to reconsider the rollout of them. Sure, new laws can be passed, but our legislators are often tech illiterate, so I don’t expect them to easily be able to solve what prominent legal and technology scholars are still grappling with. That process will take time, and the more that we see high profile cases like suicides following chatbot conversations, the cautious legislators will be. Public distrust of AI is growing, in large part because they feel like it’s being forced on them, and that will just harm the technology in the long run.

    I genuinely am excited still about the nuts and bolts of how all this stuff works. It’s my genuine enthusiasm that I feel situates me well to criticise the technology, because I’m coming from an earnest place of wanting to see humans make cool stuff that improves lives — that’s why I became a scientist, after all. This, however, does not feel like progress. Technology doesn’t exist in a vacuum and if we don’t reckon with the real harms and risks of a new tool, we risk shutting ourselves off to the positive outcomes too.


  • Your comment fills me with a deep dread that causes me to feel like saying something to discourage you from this path. Alas, it’s not your preparation that is causing that feeling, but the grim circumstances that necessitate this kind of planning.

    It’s difficult being on the other side of the world and completely unable to do anything than just watch as America descends deeper into fascism. However, I’m glad that I am not in the impossible position of making the decisions you’re making. I’m sorry that you are.

    Good luck, I hope you don’t die. And I hope that people like you are able to claw back democracy from the fascists






  • This sounds interesting. It reminds me of past workers movements in history, namely the Luddites and the UK miners strike. If you want to learn more about the Luddites and what they were asking for, the journalist Brian Merchant has a good book named “Blood in the Machine”.

    Closer to my heart and my lived experience is the miner’s strike. I wasn’t born at the time, but I grew up in what I semi-affectionately call a “post industrial shit hole”. A friend once expressed curiosity about what an alternative to shutting the mines would have been, especially in light of our increasing knowledge of needing to move away from fossil fuels. A big problem with what happened with the mines is that there were entire communities that were effectively based around the mines.

    These communities often did have other sources of industry and commerce, but with the mines gone, it fucked everything up. There weren’t enough opportunities for people afterwards, especially because miners skills and experience couldn’t easily translate to other skilled work. Even if a heckton of money had been provided to “re-skill” out of work miners, that wouldn’t have been enough to absorb the economic calamity caused by abruptly closing a mine, precisely because of how locally concentrated and effect would be. If done all at once, for instance, you’d find a severe shortage of teachers and trainers, who would then find themselves in a similar position of needing to either move elsewhere to find work, or train in a different field. The key was that there needed to be a transition plan that would acknowledge the human and economic realities of closing the mines.

    Many argued, even at the time, that a gradual transition plan that actually cared about the communities affected would lead to much greater prosperity for all. Having grown up amongst the festering wounds of the miners strike, I feel this to be true. Up in the North of England, there are many who feel like they have been forgotten or discarded by the system. That causes people a lot of pain; I think it’s typical for people to want their lives to be useful in some way, but the Northern, working class manifestation of this instinct is particularly distinct.

    Linking this back to your question, I think that framing it as compensation could help, but I would expect opposition to remain as long as people don’t feel like they have ways to be useful. A surprising contingent of people who dislike social security payments that involve “getting something for nothing” are people who themselves would be beneficiaries of such payments. I link this perspective to listlessness I describe in ex-mining communities. Whilst the vast majority of us are chronically overworked (including those who may be suffering from underemployment due to automation), most people do actually want to work. Humans are social creatures, and our capacities are incredibly versatile, so it’s only natural for us to want to labour towards some greater good. I think that any successful implementation of universal basic income would require that we speak to this desire in people, and help to build a sense that having their basic living costs accounted for is an opportunity for them to do something meaningful with their time.

    Voluntary work is the straightforward answer to this, and indeed, some of the most fulfilled people I know are those who can afford to work very little (or not at all), but are able to spend their time on things they care about. However, I see so many people not recognise what they’re doing as meaningful labour. For example, I go to a philosophy discussion group where there is one main person who liaises with the venue, collects the small fee every week (£3 per person), updates the online description for the event and keeps track of who is running each session, recruiting volunteers as needed. He doesn’t recognise the work he does as being that much work, and certainly doesn’t feel it’s enough to warrant the word “labour”. “It’s just something I do to help”; “You’re making it sound like something larger than it is — someone has to do it”. I found myself (affectionately) frustrated during this conversation because it highlights something I see everywhere: how capitalism encourages us to devalue our own labour, especially reproductive labour and other socially valuable labour. There are insufficient opportunities for meaningful contribution within the voluntary sector as it exists now, but so much of what people could and would be doing more of exists outside of that sector.

    We need a cultural shift in how we think about work. However, it’s harder to facilitate that cultural shift towards how we view labour if most people are forced to only see their labour in terms of wages and salaries. On the other hand, people are more likely to resist policies like UBI if they feel it presents a threat to their work-centred identity and their ability to conceive of their existence as valuable. It’s a tricky chicken-or-egg problem. Overall, this is why I think your framing could be useful, but is not likely to be sufficient to change people’s minds. I think that UBI or similar certainly is possible, but it’s hard to imagine it being implemented in our current context due to how radical it is. Far be it from me to shy away from radical choices, but I think that it’s necessary to think of intermediary steps towards cultivating class consciousness and allowing people to conceive of a world where their Intrinsic value is decoupled from their output under capitalism. For instance, I can’t fathom how universal basic income would work in a US without universal healthcare. It boggles my mind how badly health insurance acts to reinforce coercive labour relations. The best thing we can do to improve people’s opinion of universal basic income is to improve their material conditions.

    Finally, on AI. I think my biggest disagreement with Automation Compensation as a framing device for UBI is that it inadvertently falls into the trap of “tech critihype”, which the linked author describes as “[inverting] boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks.”. Critihype may appear to criticise something, but actually ends up feeding the hype cycle, and in turn, is nourished by it. The problem with AI isn’t that it is going to end up replacing a significant chunk of the workforce, but rather that penny-pinching managers can be convinced that AI is (or will be) able do that.

    I like the way that Brian Merchant describes the real problem of AI on his blog:

    "[…] the real AI jobs crisis is that the drumbeat, marketing, and pop culture of “powerful AI” encourages and permits management to replace or degrade jobs they might not otherwise have. More important than the technological change, perhaps, is the change in a social permission structure.”

    This critical approach is extra important when we consider that the jobs and fields most heavily being affected by AI are in creative fields. We’ve probably all seen memes that say “I want an AI to automate doing the dishes so that I can do art, not automate doing art so I can spend more time doing the dishes”. Universal Basic Income would be limited in alleviating social angst unless we can disrupt the pervasive devaluation of human life and effort that the AI hype machine is powering.

    Though I have ended up disagreeing with your suggestion, thanks for posing this question. It’s an interesting one to ponder, and I certainly didn’t expect to write this much when I started. I hope you find my response equally interesting.



  • You win by acknowledging that AI/Machine Learning research has existed long before this bubble existed, and is continuing to happen outside of that bubble. Most of what we call AI nowadays is based on neural networks (that’s what Geoffrey Hinton and others got a recent nobel prize for), but that’s not the only way to go about the problem, and for years now, there have been researchers pointing out problems like hallucinations and diminishing returns from increasing the amount of data you feed to a model.

    An example of one such researcher is Song Chun-Zhu, who has recently moved back to China because he was finding it increasingly difficult to do research he wanted (i.e. outside of the current AI bubble) within the US. That linked article is a bit of a puff piece, in that it is a tad too mythologising of him, but I think he’s a good example of what productive AI research looks like — especially because he used to work on the “big data” kind of AI, before realising its inherent limits and readjusting his approach accordingly.

    He’s one of the names that’s on my watch list because even for people who aren’t directly building on his research, he comes up a lot in research that is also burnt out on neural nets




  • A friend has extremely asymmetrical breasts, so a bra that fits their larger breast doesn’t fit their smaller one. They have a gel insert to put into that cup to account for this, but they also made a little pocket pouch in the same shape/size.

    A lot of pushup bras also have a little pocket for a smaller kind of gel insert. I know a couple people who find that pocket useful for hiding valuable and/or illicit things (e.g. drugs)









  • Some of the best artists I know are people who started out without a single iota of talent, but they practiced for long enough that they got good. I reckon that talent probably does exist, but it’s a far smaller component than many believe. Hard word beats talent when talent doesn’t work hard.

    People who are most likely to emphasise talent in art tend to be people who wish they were good at art, but aren’t willing (or able) to put the time into improving; it feels oddly reassuring to tell oneself that it’s pointless to try if you don’t start out with talent, rather than being realistic and saying “I wish I were good at art, but I am choosing not to invest in that skill because it’s not one of my priorities”