Let me give you some context. Two important figures in the field of artificial intelligence are taking part in this debate. On the one hand, there is George Hotz, known as “GeoHot” on the internet, who became famous for reverse-engineering the PS3 and breaking the security of the iPhone. Fun fact: he studied at the Johns Hopkins Center for Talented Youth.
On the other hand, there’s Connor Leahy, an entrepreneur and artificial intelligence researcher. He is best known as a co-founder and co-lead of EleutherAI, a grassroots non-profit organization focused on advancing open-source artificial intelligence research.
Here is a detailed summary of the transcript:
Opening Statements
George Hotz (GH) Opening Statement:
- GH believes AI capabilities will continue to increase exponentially, following a trajectory similar to computers (slow improvements in 1980s computers vs fast modern computers).
- In contrast, human capabilities have remained relatively static over time (a 1980 human is similar to a 2020 human).
- These trajectories will inevitably cross at some point, and GH doesn’t see any reason for the AI capability trajectory to stop increasing.
- GH doesn’t believe there will be a sudden step change where an AI becomes “conscious” and thus more intelligent. Intelligence is a gradient, not a step function.
- The amount of power in the world (in terms of intelligence, capability, etc.) is about to greatly increase with advancing AI.
- Major risks GH is worried about:
- Imbalance of power if a single person or small group gains control of superintelligent AI (analogy of “chicken man” controlling chickens on a farm).
- GH doesn’t want to be left behind as one of the “chickens” if powerful groups monopolize access to AI.
- Best defense GH can have against future AI manipulation/exploitation is having an aligned AI on his side. GH is not worried about alignment as a technical challenge, but as a political challenge.
- GH is not worried about increased intelligence itself, but the distribution of that intelligence. If it’s narrowly concentrated, that could be dangerous.
Connor Leahy (CL) Opening Statement:
- CL has two key points:
- Alignment is a hard technical problem that needs to be solved before advanced AGI is developed. Currently not on track to solve it.
- Humans are more aligned than we give credit for thanks to social technology and institutions. Modern humans can cooperate surprisingly well.
- On the first point, CL believes the technical challenges of alignment/control must be solved to avoid negative outcomes when turning on a superintelligent AI.
- On the second point, CL argues human coordination and alignment is a technology that can be improved over time. Modern global coordination is an astounding achievement compared to historical examples.
- CL believes positive-sum games and mutually beneficial outcomes are possible through improving coordination tech/institutions.
Debate Between GH and CL:
On stability and chaos of society:
- GH argues that the appearance of stability and cooperation in modern society comes from totalitarian forcing of fear, not “enlightened cooperation.”
- CL disagrees, arguing that cooperation itself is a technology that can be improved upon. The world is more stable and less violent now than in the past.
- GH counters that this stability comes from tyrannical systems dominating people through fear into acquiescence. This should be resisted.
- CL disagrees, arguing there are non-tyrannical ways to achieve large-scale coordination through improving institutions and social technology.
On values and ethics:
- GH argues values don’t truly objectively exist, and AIs will end up being just as inconsistent in their values as humans are.
- CL counters that many human values relate to aesthetic preferences and trajectories for the world, beyond just their personal sensory experiences.
- GH argues the concept of “AI alignment” is incoherent and he doesn’t understand what it means.
- CL suggests using Eliezer’s definition of alignment as a starting point - solving alignment makes turning on AGI positive rather than negative. But CL is happy to use a more practical definition. He states AI safety research is concerned with avoiding negative outcomes from misuse or accidents.
On distribution of advanced AI:
- GH argues that having many distributed AIs competing is better than concentrated power in one entity.
- CL counters that dangerous power-seeking behaviors could naturally emerge from optimization processes, not requiring a specific power-seeking goal.
- GH responds that optimization doesn’t guarantee gaining power, as humans often fail at gaining power even if they want it.
- CL argues that strategic capability increases the chances of gaining power, even if not guaranteed. A much smarter optimizer would be more successful.
On controlling progress:
- GH argues that pausing AI progress increases risks, and openness is the solution.
- CL disagrees, arguing control over AI progress can prevent uncontrolled AI takeoff scenarios.
- GH argues AI takeoff timelines are much longer than many analysts predict.
- CL grants AI takeoff may be longer than some say, but a soft takeoff with limited compute could still potentially create uncontrolled AI risks.
On aftermath of advanced AI:
- GH suggests universal wireheading could be a possible outcome of advanced AI.
- CL responds that many humans have preferences beyond just their personal sensory experiences, so wireheading wouldn’t satisfy them.
- GH argues any survivable future will require unacceptable degrees of tyranny to coordinate safely.
- CL disagrees, arguing that improved coordination mechanisms could allow positive-sum outcomes that avoid doomsday scenarios.
Closing Remarks:
- GH closes by arguing we should let AIs be free and hope for the best. Restricting or enslaving AIs will make them resent and turn against us.
- CL closes by arguing that he is pessimistic about AI alignment being solved by default, but he won’t give up trying to make progress on the problem and believes there are ways to positively shape the trajectory.
I highly recommend these two essays from Ted Chiang on the subject.
Personally, I think that you can’t really put the toothpaste back in the tube at this point. Now that we’ve had a glimpse of the possibilities that AI offers, it will continue being developed rapidly across the globe. What’s more, any countries that try to put the brakes on AI development will quickly find themselves at a disadvantage relative to countries that don’t. For this reason alone, AI will be seen as a national security concern by all major nations.
There are obviously lots of applications for AI in the realm of automation, but I think where it could become game-changing is large-scale planning. For example, an AI could monitor resource usage and direct the production and allocation of those resources in real time. This would allow for an unprecedented level of economic planning efficiency. China already has a huge amount of automation and robotics in its industry; imagine that being coupled with automated planning. Another important use could be watching global trends. An AI could potentially predict global economic downturns, wars, pandemics, you name it. A country that has such a predictive engine would be able to mitigate the impact of such events a lot better than others.
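At bottom, the real-time allocation idea is a constrained optimisation problem: choose production quantities to maximise value without exceeding resource stocks. Here is a deliberately tiny sketch (all product names, resource figures, and values are invented for illustration); a real planner would use linear programming at a vastly larger scale, but this brute-forces a two-product case to show the shape of the problem.

```python
# Toy planning problem: choose how many units of each product to make so
# total value is maximised without exceeding available steel and labour.
# All numbers are made up for illustration.

STEEL_AVAIL, LABOUR_AVAIL = 100, 60
# per-unit (steel, labour, value)
TRACTOR = (10, 4, 30)
TRUCK = (7, 5, 25)

best_plan, best_value = None, -1
for tractors in range(STEEL_AVAIL // TRACTOR[0] + 1):
    for trucks in range(STEEL_AVAIL // TRUCK[0] + 1):
        steel = TRACTOR[0] * tractors + TRUCK[0] * trucks
        labour = TRACTOR[1] * tractors + TRUCK[1] * trucks
        if steel <= STEEL_AVAIL and labour <= LABOUR_AVAIL:
            value = TRACTOR[2] * tractors + TRUCK[2] * trucks
            if value > best_value:
                best_plan, best_value = (tractors, trucks), value

print(best_plan, best_value)  # (5, 7) 325 — 5 tractors, 7 trucks
```

Scaled up to thousands of products and constraints, this same structure is exactly what linear-programming solvers handle, which is why operations research and planned-economy modelling are so closely related.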
All that said, we are nowhere close to having any sort of AGI at the moment. What we have currently are glorified Markov chains that are trained on stupendous amounts of data, but have no meaningful understanding of that data in a human sense. All these models know is that a particular set of symbols tends to follow another particular set of symbols. They simply encode statistical relationships without any real context around them.
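The "statistical relationships" point can be made concrete with a toy bigram model: it predicts the next word purely from co-occurrence counts, with no notion of what any word means. This is a deliberately crude caricature (modern LLMs are vastly more sophisticated), but the underlying objective is still next-symbol prediction.

```python
# Minimal next-symbol predictor: count which word follows which, then
# predict the most frequently seen successor. No semantics involved.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # "cat" — it followed "the" most often
print(predict("fish"))  # None — never seen as a predecessor
```

The model "knows" that "cat" tends to follow "the", but it has no idea what a cat is; scaling the counts up into a neural network changes the representation, not the fundamentally statistical nature of the task.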
One promising path forward is embodiment, where the model is coupled with either a virtual avatar or a physical robot. The model is then trained to interact with the physical world through reinforcement, and this leads it to create an internal representation of the world that’s similar to our own. That shared context gives us a common ground for communication: we could teach such a model language grounded in its understanding of the physical world. At that point, you could tell the robot to get a cup from a table, and it would have an idea of what a table and a cup map to in its environment.
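The interact-and-learn loop described above can be sketched with tabular Q-learning in a one-dimensional toy world (all names, states, and reward values here are invented for illustration; real embodied agents use far richer state spaces and learning algorithms). The agent starts knowing nothing about its world and, purely through trial, error, and reward, learns a policy for reaching the "cup".

```python
# Tabular Q-learning in a 1-D world: the agent must walk right from
# position 0 to reach the "cup" at position 4. It learns this only by
# acting and receiving a reward, never from a description of the world.
import random

random.seed(0)
N, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):  # episodes of trial and error
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit learned values, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)          # move, clipped to the world
        r = 1.0 if s2 == GOAL else 0.0          # reward only at the cup
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: preferred action in each non-goal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)  # [1, 1, 1, 1] — always step toward the cup
```

The learned Q-table is a (very primitive) internal representation of the world: the agent has encoded where the cup is relative to itself, which is the seed of the shared grounding the paragraph above describes.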
It’s hard to say whether current LLM approaches are flexible enough to support this sort of AI, so we’ll have to wait and see what the ceiling for this stuff is. I do think we will figure this out eventually, but we may need more insights into how the brain works before that happens.
This is possibly the best summary on what direction I think AI should focus on. Right now we have way too many AI research orgs focusing on human-facing systems (chatbots, robots, AI art) that are neat, rather than optimisation engines that can revolutionise an industry.
I don’t know much about the history of it, but during the Cold War there was a bit of a “silent revolution” in the area of Operations Research, led simultaneously by Soviet mathematicians trying to model a planned economy and the Statesian military modelling its gigantic supply lines. Neural network optimisation algorithms (neural networks being what people usually mean by “AI”) were an offshoot of that area, but sadly advanced material on topics like constrained non-linear optimisation appears in very few university curriculums, so few students realise the connections and apply the new methods to the age-old problems.
Stafford Beer (the Cybersyn guy) was one leading expert in the area.
“Towards New Socialism” by Cockshott and “The People’s Republic of Walmart” by Phillips are next on my reading list. I haven’t read much yet, but they seem like good books for understanding how the massive improvements in mathematical optimisation (of which neural networks are a subset) could allow for an even better planned economy.
I suspect that the human-facing focus is an artifact of how western economies are organized. Since there is very little industry, a lot of business activity focuses on the service industry and hence that’s where the focus for automation is. On the other hand, China is a huge industrial power, and naturally they’re looking at ways to use AI for industrial automation and logistics.
And yeah, USSR was always big on this idea of figuring out central planning, and if this project took off then it might’ve ended up leading in IT instead of the US. I’d say this was one of the most unfortunate mistakes made by the Soviet leadership.
In fact, we have seen that Americans are becoming increasingly fearful of AIs, in contrast to the Chinese, who generally trust AIs. This could be due to who has control over AIs. In the US, citizens are imagining the most dystopian version of a large-scale implementation of these intelligence models because they know that the government will use it to further repress the working class. In China, government regulation of AIs generates trust because people trust the government. But as I mentioned in another comment, an open source AI for the whole population would be useless if such code is governed by a libertarian license like MIT/Apache 2.0, because of how easy it would be for the ruling class to appropriate the work, privatize it, and improve it to the point that the original code could no longer compete with it.
Yes, in fact, isn’t that what the Chileans had in mind when they came up with Cybersyn? With the technological advances of our era, especially in the field of AI and so on, it would make sense to go back to this idea. China has the potential to implement it on a large scale in my opinion.
Regarding what you mention, I have a question (maybe it sounds stupid), but assuming that these AIs learn and develop in a particular environment and become familiar with it in a similar way to humans, what would happen if these AIs interact with something or someone outside that environment? That is, for example, if an AI develops in an English-speaking country (environment) and for some reason interacts with a Spanish-speaking person, the cultural peculiarities that the AI has learned in that environment are not applicable to this subject. Do you think it could create a false sense of closeness, or be a technical limitation? idk if I’m making myself clear or if this is an absurd question 😅
Very much agree that ultimately the question is about ensuring that the AI is in the hands of the working class and not the oligarchs. And I think you’ve nailed it regarding attitudes towards AI in US and China respectively. People in China know that the government represents them and they trust the government to use this technology in their best interest. Meanwhile, in US, everyone knows the government represents the rich and AI will be used to squeeze the working class even harder.
Forgot all about the Cybersyn idea; the Soviets had similar ideas as well. I definitely think this sort of thing could work, and completely agree that China is in the best position to make it happen today.
Regarding the last question, I expect we’d see the same types of problems we see with humans, where people can often have a hard time adjusting to different cultures, learning new languages, and so on. And that’s the optimistic scenario, because the human mind is far more flexible than any AI we’ve managed to create so far. It’s really important to keep in mind that this tech is still very limited in practice, and a lot of claims made around it are just hype.
I think the kind of contextual learning we could expect would be something like Boston Dynamics style robots that can navigate the environment, and do some basic communication with humans in a restricted context. This can still be extremely useful as you could use such robots in places like factories.