xiaohongshu

  • 0 Posts
  • 59 Comments
Joined 7 months ago
Cake day: August 1st, 2024

  • Let’s cut through the bullshit and say it for what it is:

    The Democrats won’t “resist” this time because the role of fascism is to discipline labor/voters. Here, the goal is to discipline the voters who think they can just sit out an election for various reasons like Palestine and refuse to follow the system and vote for Kamala.

    Jeffries has already said it as clearly as he could have: “what can we do? they control the senate, house and presidency” = “you didn’t vote for us”

    They’re not going to put up a fight this time, at least not until the voters have been sufficiently disciplined.

    In their view, the voters were children throwing a tantrum, and so they’re gonna punish the children by letting them see for themselves “what happens when you choose to defy the rules we set and don’t want to let the Adults in the Room be in charge. let’s see how long you’d last”.

    If enough voters are scared enough by Trump to start voting for the Democrats again, then the system is merely functioning as intended: fascism as a disciplinary tool.


  • I’m willing to bet that this plays out like the Cultural Revolution but with an American twist.

    Elon’s Bazinga Guards will be given the highest authority to smash up the old system and create disruptions all over the country, banishing Federal Bureaucrats and DEI Academics to the countryside. And when the turmoil is at its peak, Trump will rein in the terror and cut the music, paving the way for the formerly exiled Bureaucrats to return and take charge of the reform, and for the complete transition of the ruling party into a New Republican Party with Democratic characteristics, which will go on to win in a landslide in the 2028 election.

    In other words, this is a Revolution within the American bourgeois class to resolve the internal contradictions of the American political system and prepare its transition to the next stage of Fascism.




  • This is a prelude to the privatization of federal agencies. Note that the severance payout runs until September, which is when the federal budget cycle ends. If the agencies are not staffed adequately in the following cycle, their services will deteriorate, paving the way for a private takeover. Government contracts will then have to be outsourced to the private sector.

    Honestly, the bourgeoisie are simply using the Trump/Musk “downsizing of the government” as an excuse to advance their goal of privatizing government agencies.




  • I think you have a fundamental misunderstanding of how neural network based LLMs work.

    Let’s say you give it the prompt “tell me if capitalism is a good or a bad system”. In a very simplistic sense, what it does is query the words/sentences associated with “capitalism” and “good”, as well as “capitalism” and “bad”, which it has learned from being trained on the entire internet’s data, and from there it spews out seemingly coherent sentences and paragraphs about why capitalism is good or bad.
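    As a toy sketch of my own (real LLMs are neural networks predicting tokens over long contexts, but the “word association” intuition is the same), here is a bigram model that generates each next word purely from co-occurrence counts in its training text:

    ```python
    import random
    from collections import defaultdict

    # Tiny training corpus; for an LLM this would be the scraped internet.
    corpus = (
        "capitalism is a good system because markets allocate resources efficiently . "
        "capitalism is a bad system because it exploits workers . "
        "capitalism is a system of private ownership ."
    ).split()

    # Record which words tend to follow which.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(seed: str, length: int = 12) -> str:
        """Chain word associations into coherent-looking text."""
        words = [seed]
        for _ in range(length):
            candidates = follows.get(words[-1])
            if not candidates:
                break
            words.append(random.choice(candidates))
        return " ".join(words)

    print(generate("capitalism"))
    # e.g. "capitalism is a bad system because markets allocate resources efficiently ."
    # It reads like a judgement, but it is only statistics over the training text.
    ```

    The output flips between “good” and “bad” depending on which associations happen to be sampled, which is the point: no evaluation of capitalism is taking place, only reproduction of word patterns.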

    It does not have the capacity to reason or evaluate whether capitalism as an economic system itself is good or bad. These LLMs are instead very powerful statistical models that can reproduce coherent human language based on word associations.

    What is groundbreaking about the transformer architecture in natural language processing is that it allows the network to retain association memory for far longer than previous iterations like LSTM or seq2seq could. Those would start spewing out garbled text after a few sentences or so, because their architectures do not allow memory to be properly retained over long spans (the vanishing gradient problem). Transformer-based models solved that problem and enabled the reproduction of entire paragraphs and even essays of seemingly coherent, human-like writing because of their strong memory retention. Impressive as that is, the model does not understand grammatical structures or rules. Train it on a bunch of broken English texts, and it will spew out broken English.
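    For what it’s worth, here is a stripped-down sketch of the self-attention operation at the heart of the transformer (my own simplification; the learned query/key/value projections and multiple heads are omitted). Every token scores every other token directly, so information from the start of a passage is still reachable at the end without being squeezed through a recurrent state the way LSTM/seq2seq models do:

    ```python
    import numpy as np

    def self_attention(X: np.ndarray) -> np.ndarray:
        """X: (sequence_length, d_model) token embeddings."""
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)                    # each token scores every other token
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sequence
        return weights @ X                               # each output mixes all positions

    np.random.seed(0)
    tokens = np.random.randn(6, 4)          # 6 tokens, 4-dimensional embeddings
    print(self_attention(tokens).shape)     # (6, 4): every position attends to every other one
    ```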

    In other words, the output you’re getting from LLMs (“capitalism good or bad?”) is simply word associations it has been trained on from input collected across the entire internet, not actual thinking coming from its own internal mental framework or a real-world model that could actually comprehend causality and reasoning.

    The famous case of Google AI telling people to put glue on their pizza is a good example of this. It can be traced back to a Reddit joke post. The LLM itself doesn’t understand anything, it simply reproduces what it has been trained on. Garbage in, garbage out.

    No amount of “neurosymbolic AI” is going to solve the fundamental issue of LLMs not being able to understand causality. The “chain of thought” process lets researchers tweak the model better by exposing the specific path the model takes to arrive at its answer, but it is not remotely comparable to a human going through their thought process.
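    To make “chain of thought” concrete, this is roughly what it looks like in practice (hypothetical prompts of my own, not anyone’s production setup): the model is simply nudged into emitting intermediate steps as extra tokens, which are themselves generated text, not a trace of an internal reasoning engine.

    ```python
    # Plain prompt: the model jumps straight to an answer.
    direct_prompt = (
        "Q: A train travels 60 km in 45 minutes. What is its speed in km/h?\n"
        "A:"
    )

    # Chain-of-thought prompt: the model is asked to write out intermediate steps first.
    cot_prompt = (
        "Q: A train travels 60 km in 45 minutes. What is its speed in km/h?\n"
        "A: Let's think step by step."
    )

    # With the second prompt a model will typically produce something like
    # "45 minutes is 0.75 hours, 60 / 0.75 = 80 km/h" before the final answer.
    # Useful for inspecting and debugging outputs, but the "steps" are still
    # next-token prediction, not comprehension of causality.
    ```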



  • They’re not just automation, though.

    Industrial automation is purpose-built equipment and software designed by experts, with very specific boundaries set to ensure that tightly regulated specifications can be met - i.e., if you are designing and building a car, you had better make sure that the automation doesn’t do things it’s not supposed to do.

    LLMs are general-purpose language models that can be called up to spew out anything, without proper reference to their reasoning. You can technically use them to “automate” certain tasks, but they are not subjected to the same kind of rules and regulations employed in the industrial setting, where tiny miscalculations can lead to serious consequences.

    This is not to say that they are useless and cannot aid in the workflow, but their real use cases have to be manually curated and extensively tested by experts in the field, with all the caveats of potential hallucinations that can cause severe consequences if not caught in time.
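    As a rough sketch of the kind of curation I mean (entirely hypothetical: `call_llm`, the component, and the torque range are placeholders of my own, not a real API or spec), the model’s output is never trusted directly; it is parsed and checked against hard limits set by a domain expert before anything downstream uses it:

    ```python
    import json

    TORQUE_LIMITS_NM = (20.0, 90.0)   # hypothetical range validated by engineers, not by the model

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("placeholder for whichever model/API is actually used")

    def suggest_torque(component: str) -> float:
        raw = call_llm(
            f"Suggest a bolt torque in Nm for {component}. "
            'Reply as JSON: {"torque_nm": <number>}'
        )
        try:
            value = float(json.loads(raw)["torque_nm"])
        except (ValueError, KeyError, TypeError):
            raise RuntimeError("Unusable model output; escalate to a human")
        if not (TORQUE_LIMITS_NM[0] <= value <= TORQUE_LIMITS_NM[1]):
            raise RuntimeError(f"Suggested {value} Nm is outside the validated range; escalate to a human")
        return value
    ```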

    What you’re looking for is AGI, and the current iterations of AI are the furthest you can get from an AGI that can actually reason and think.



  • I think this kind of statement needs more elaboration to have a proper discussion about it.

    LLMs can really be summarized as “squeezing the entire internet into a black box that can be queried at will”. It has many use cases but even more potential for misuse.

    All forms of AI (artificial intelligence in the literal sense) as we know it (i.e., not artificial general intelligence or AGI) are just statistical models that do not have the capacity to think, have no ability to reason, and cannot critically evaluate or verify a piece of information, which may equally come from a legitimate source or some random Reddit post (the infamous case of Google AI telling you to put glue on your pizza can be traced back to a Reddit joke post).

    These LLM models are built by training on the entire internet’s datasets using a transformer architecture that has very good memory retention, and more recently, with reinforcement learning from human feedback to reduce their tendency to produce incorrect output (i.e. hallucinations). Even then, these datasets require extensive tweaking and curation, and OpenAI famously employed Kenyan workers at less than $2 per hour to perform the tedious work of dataset annotation used for training.

    Are they useful if you just need to pull up a piece of information that is not critical in the real world? Yes. Are they useful if you don’t want to do your homework and just let the algorithm solve everything for you? Yes (of course, there is an entire discussion about future engineers/doctors who are “trained” by relying on these AI models and then go on to do real things in the real world without developing the capacity to think/evaluate for themselves). Would you ever trust them if your life depended on it (e.g. building a car, a plane or a house, or treating an illness)? Hell no.

    A simple test case is to ask yourself whether you would ever trust an AI model over a trained physician to treat your illness. A human physician has access to real-world experience that an AI will never have (no matter how much medical literature it can devour on the internet), and has the capacity to think and reason, and thus the ability to respond to anomalies which have never been seen before.

    An AI model needs thousands of images to learn the difference between a cat and a dog; a human child can learn that with just a few examples. Without a huge input dataset (annotated with the help of an army of underpaid Kenyan workers), the accuracy is simply crap. The fundamental process of learning is very different between the two, and until we have made advances on AGI (which is as far as you could get from the current iterations of AI), we’ll always have to deal with the potential misuses of AI in our lives.
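    The data-hunger point is easy to see for yourself with a toy experiment (my own sketch, using scikit-learn’s small bundled digits dataset rather than cats and dogs): the same simple classifier is near-useless with a handful of labelled examples and only becomes usable once someone has produced a lot of labels.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=500, random_state=0)

    for n in (10, 50, 1000):
        clf = LogisticRegression(max_iter=2000)
        clf.fit(X_train[:n], y_train[:n])    # train on only the first n labelled examples
        print(n, "training examples -> accuracy", round(clf.score(X_test, y_test), 2))
    # Accuracy is poor with 10 examples and climbs steadily as labelled data is added.
    ```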


  • Moderation on Chinese social media is patchy, and it’s going to depend on your luck a lot of the time. If someone doesn’t like what you have to say and believes it infringes the rules, there’s not much you can do about it. Most people either self-censor or use slang or homophone substitutes to bypass automated censorship filters.

    Discussions about sensitive topics won’t be easily translatable to English because people write in pinyin initials or homophones, and you almost have to be a native Chinese speaker to be able to read them.