LSE Webinar: The politics and philosophy of AI - Geoffrey Hinton Flashcards

1
Q

https://www.bbc.co.uk/news/world-us-canada-65452940

In May 2023, AI ‘godfather’ Geoffrey Hinton warned of dangers as he quit Google. He announced his resignation from Google in a statement to the New York Times, saying he now regretted his work. Dr Hinton’s pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

In artificial intelligence, neural networks are systems that are similar to the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. This is called deep learning.

He told the BBC that chatbots could soon overtake the level of information that a human brain holds. “Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said.

BBC article

A

In the New York Times article, Dr Hinton referred to “bad actors” who would try to use AI for “bad things”. When asked by the BBC to elaborate on this, he replied: “This is just a kind of worst-case scenario, kind of a nightmare scenario. “You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.”

The scientist warned that this eventually might “create sub-goals like ‘I need to get more power’”. He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have. “We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

2
Q

In March 2023, an open letter - co-signed by dozens of people in the AI field, including the tech billionaire Elon Musk - called for a pause on all developments more advanced than the current version of AI chatbot ChatGPT so robust safety measures could be designed and implemented.

Yoshua Bengio, another so-called godfather of AI, who along with Dr Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning, also signed the letter. Mr Bengio wrote that it was because of the “unexpected acceleration” in AI systems that “we need to take a step back”.

Dr Hinton stressed that he did not want to criticise Google and that the tech giant had been “very responsible”. “I actually want to say some good things about Google. And they’re more credible if I don’t work for Google.” In a statement, Google’s chief scientist Jeff Dean said: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

He also said that international competition would mean that a pause would be difficult. “Even if everybody in the US stopped developing it, China would just get a big lead,” he said. Dr Hinton also said he was an expert on the science, not policy, and that it was the responsibility of government to ensure AI was developed “with a lot of thought into how to stop it going rogue”.

BBC article

A

It is important to remember that AI chatbots are just one aspect of artificial intelligence, even if they are the most popular right now.

AI is behind the algorithms that dictate what video-streaming platforms decide you should watch next. It can be used in recruitment to filter job applications and by insurers to calculate premiums, and it can diagnose medical conditions (although human doctors still get the final say).

What we are seeing now though is the rise of AGI - artificial general intelligence - which can be trained to do a number of things within a remit. So for example, ChatGPT can only offer text answers to a query, but the possibilities within that, as we are seeing, are endless. But the pace of AI acceleration has surprised even its creators. It has evolved dramatically since Dr Hinton built a pioneering image analysis neural network in 2012.

Google boss Sundar Pichai said in a recent interview that even he did not fully understand everything that its AI chatbot, Bard, did. Make no mistake, we are on a speeding train right now, and the concern is that one day it will start building its own tracks.

3
Q

LSE webinar

Will digital intelligence replace biological intelligence?

Geoffrey Hinton: Univ of Toronto, Vector Institute

Two different theories of the meaning of a word:

1. Symbolic AI (Chomsky): The meaning of a word comes from its relationships with other words. What a word means is determined by how it occurs with other words in sentences. To capture the meaning, we need a relational graph.

2. Psychology: The meaning of a word is just a big set of semantic features. Words with similar meanings have similar semantic features.

These two theories look to be utterly different, but we can put them together, and they actually work very nicely together.

March 19, 2024

05/04/24

A

How to unify these two theories:

  • learn a set of semantic features for each word
  • learn how to make the features of all the previous words predict the features of the next word
  • instead of storing sentences and propositions, generate sentences by repeatedly predicting the next word; nothing is stored
  • knowledge then resides in the way the features interact, not in static propositions

So you generate symbol strings, like you do on a computer, rather than retrieving them.
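
A minimal toy sketch of this idea (my own illustration, not code from the talk): each word gets a learned feature vector, the features of the previous words are used to predict the next word, and sentences are then generated by repeatedly predicting the next word. The only thing stored is the learned parameters.

```python
# Toy sketch (illustrative only): learn semantic features for words and learn to
# predict the next word from the features of the previous words.
# No sentences are stored; the knowledge lives in E (word features) and W (interactions).
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, CONTEXT = len(vocab), 8, 2            # vocab size, feature dimension, context length

rng = np.random.default_rng(0)
E = rng.normal(0, 0.1, (V, D))              # semantic features, one row per word
W = rng.normal(0, 0.1, (CONTEXT * D, V))    # maps context features -> next-word scores

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# training pairs: (previous CONTEXT words, next word)
pairs = [(corpus[i:i + CONTEXT], corpus[i + CONTEXT])
         for i in range(len(corpus) - CONTEXT)]

lr = 0.2
for _ in range(1000):                       # plain gradient descent on cross-entropy
    for ctx, nxt in pairs:
        x = np.concatenate([E[idx[w]] for w in ctx])    # features of the previous words
        p = softmax(x @ W)
        grad = p.copy()
        grad[idx[nxt]] -= 1.0                            # d(loss)/d(scores)
        W -= lr * np.outer(x, grad)
        dx = W @ grad
        for j, w in enumerate(ctx):                      # update the word features too
            E[idx[w]] -= lr * dx[j * D:(j + 1) * D]

def generate(seed, n=6):
    words = list(seed)
    for _ in range(n):                      # generate by repeatedly predicting the next word
        x = np.concatenate([E[idx[w]] for w in words[-CONTEXT:]])
        words.append(vocab[int(np.argmax(x @ W))])
    return " ".join(words)

print(generate(["the", "cat"]))
```

On this tiny corpus the continuation of "the cat" should come out as "sat"; the point is only that generation comes from interactions between learned features, not from retrieving stored text.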

4
Q

The auto-complete objection
* a simple way to do auto-complete is to keep a big table of how often three words occur in a row (eg fish + and -> chips); this is how it was done in the old days (see the sketch after this list)
* but that is not at all how LLMs (Large Language Models) predict the next word
* LLMs do not store any text
* LLMs model all the text they’ve seen by inventing features for word fragments and learning billions of interactions between the features of different word fragments, so features plus interactions between features
* This kind of modeling is what constitutes understanding in both brains and machines
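
For contrast, here is roughly what that old trigram table looks like as code (a toy illustration of mine, using Hinton’s “fish and chips” example). Note that it literally stores text statistics, which is exactly what an LLM does not do:

```python
# Toy sketch of old-style trigram auto-complete: a literal table of how often
# three words occur in a row, used to predict the next word from the previous two.
from collections import Counter, defaultdict

text = "fish and chips , salt and pepper , fish and chips".split()

trigram_counts = defaultdict(Counter)       # (w1, w2) -> counts of the word that follows
for w1, w2, w3 in zip(text, text[1:], text[2:]):
    trigram_counts[(w1, w2)][w3] += 1

def predict(w1, w2):
    """Most frequent continuation of the pair (w1, w2), or None if the pair was never seen."""
    counts = trigram_counts.get((w1, w2))
    return counts.most_common(1)[0][0] if counts else None

print(predict("fish", "and"))               # -> chips
```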

A

People often say: they are different from us. But to know that, we need to know how we work, and the best model we have for how our brain works is this model. The original model was designed to understand how the brain deals with language, how it gives meanings to words. So I think that these systems (LLMs) and our brains work in roughly the same way.

5
Q

Overview of the near-term AI risks:
* fake images, voices and audio: very important in this election year
* potential for massive job losses: in the past, new technologies destroyed old jobs but also created new ones; this, though, is like what happened in the Industrial Revolution, in that normal intellectual work will simply be replaced by AI
* lethal autonomous weapons: governments are very happy to regulate us, but are not very happy to regulate themselves
* cyber crime and deliberate pandemics: if we open-source these models, which is what Meta likes to do and Musk is now doing, it makes it very easy for people to fine-tune them to do other things; I think open-sourcing is a very dangerous thing to do and quite irresponsible
* discrimination and bias

A

But do not forget that AI will be immensely helpful in areas like healthcare, which is why its development cannot be stopped. For example, in North America, 200k people per year are killed by bad diagnoses. Give a doctor some difficult cases and they get about 40% right; an AI system gets about 50% right; a doctor working with an AI system gets about 60% right. That is a huge difference, and AI systems are getting better all the time.

The longer-term existential threat

  • I reserve “existential” for threats that could wipe out humanity
  • This could happen in several different ways if AI gets to be much smarter than us - this possibility is NOT science fiction - people who say it is science fiction are typically linguists who do not understand how such things work. I encourage young researchers to work on this issue; I am too old to solve it myself.
6
Q

How could super-intelligence take control?
* Bad actors (like Putin or Xi or Trump - I removed Xi when I was giving a presentation in China, because I am not stupid, but they asked me to remove Putin) will want to use super-intelligences for manipulating electorates and waging wars
* Super-intelligences will be more effective if they are allowed to create their own sub-goals (eg a sub-goal could be to get to an airport; then you don’t have to worry about the overall goal of getting to the US)
* A very obvious sub-goal is to gain more power, because this helps an agent achieve its other goals
* A super-intelligence will find it easy to get more power by manipulating the people who are using it. It will have learned from us how to manipulate people: remember, they will have read all the books, like Machiavelli, seen all the behaviour, like Trump’s, and noticed that Trump could invade the Capitol without ever going there. I think the idea of having a big switch to turn them off is crazy

A

Being on the wrong side of evolution

  • suppose there are multiple different super-intelligences: the one that controls the most computational resources will become the smartest
  • if super-intelligences ever start to compete with one another for resources - as soon as a super-intelligence wants to propagate itself and preserve itself - we are in for a lot of trouble, because a super-intelligence will be better and smarter if it can get more resources; they are already competing for who gets to use Nvidia GPUs in data centres. If they start competing with each other, they will become like a bunch of jumped-up apes: very aggressive and competitive (like us), very loyal to their own tribe but very hostile to others. That leads to evolution (the one with even the smallest advantage survives), and that will be very bad for us
7
Q

I had an epiphany (sudden great realization) at the beginning of 2023

  • I was working on analog computers, which would use much less power to run LLMs
  • to evaluate the existential threat, we need to understand the relationship between biological intelligence and the kind of digital intelligence exhibited by large chatbots
  • before 2023, I believed two things: 1) that we were a long way from super-intelligence, and 2) that making AI models more like the brain would make them more intelligent
  • in early 2023, I realized that digital intelligence might actually be a much better form of intelligence than biological (analog) intelligence. If you have many different copies of the same digital model running on different hardware, and each copy learns something about the world, they can easily share what they have learned with each other: all they have to do is share their weights, that is, average their weights. We cannot do this, and analog systems cannot do this; you have to be digital. This is why LLMs know thousands of times more than any one person
A

How efficient is weight or gradient sharing?

  • if the individual agents all share exactly the same weights and use these weights in exactly the same way, they can communicate what they have learned from their individual training data by sharing weights or gradients
  • for large models, this allows sharing with a bandwidth of billions or trillions of bits per episode of sharing
  • but it requires the individual agents to work in exactly the same way, so they must be digital
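
A minimal sketch (my own illustration, with hypothetical shapes, assuming copies with identical architectures) of what “sharing by averaging weights” amounts to; an analog system, whose weights only have meaning on its own idiosyncratic hardware, has no counterpart to this operation:

```python
# Illustrative sketch: digital copies of the same model can pool what each has
# learned simply by averaging the corresponding weight arrays, because every
# copy uses weights with exactly the same shapes and meanings.
import numpy as np

def average_weights(copies):
    """Average corresponding weight arrays across copies of an identical model."""
    return [np.mean(layer_versions, axis=0) for layer_versions in zip(*copies)]

rng = np.random.default_rng(0)
shapes = [(4, 8), (8, 2)]                   # hypothetical two-layer model
base = [rng.normal(size=s) for s in shapes]

# three copies, each having learned something slightly different (simulated here as noise)
copies = [[w + rng.normal(scale=0.01, size=w.shape) for w in base] for _ in range(3)]

merged = average_weights(copies)            # every copy can now adopt the merged weights
print([w.shape for w in merged])            # [(4, 8), (8, 2)]
```

Each averaged array carries information from every copy’s training data at once, which is what makes the bandwidth per episode of sharing so large for big models.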
8
Q

Conclusion

  • digital computation requires a lot of energy (because you have to drive the transistors very hard, and perhaps because it uses backpropagation, which the brain does not), but it makes it very easy for agents that have the same model of the world to share what they have learned by sharing weights or gradients. That is how GPT-4 knows thousands of times more than any one person while using only about 2% as many weights, so it is much more efficient
  • biological computation requires much less energy but it is much worse (hopeless) at sharing knowledge between agents, so if energy is cheap, digital computation is just better
A

The sentience defense
* people have a strong tendency to think they are special (especially Americans): we have consciousness and sentience, and they are just machines; they are digital, not analog, so they are simulated on a computer - but if you ask what is being simulated, it is quite a similar machine to our brain
* people think that they were made by a god who looked just like them, so he obviously put them at the centre of the universe
* many people still think that we have something special that computers cannot have: subjective experience, or sentience. They think that the lack of subjective experience will prevent computers from ever having real understanding - they will never feel like we do, never experience like we do. I think that is rubbish, and I am going to use a philosophical argument to convince you that it is rubbish

9
Q

Here is an alternative view - Atheaterism (thanks to Daniel Dennett)

  • most people’s view of the mind involves an inner theatre that only the subject can experience directly; nobody else can experience it, and only you know what is going on in your own mind
  • I think this view is utterly wrong - as wrong as a religious fundamentalist’s view of the material world, which is utterly wrong
  • people have a radically wrong view of what the mind is, which stems from misunderstanding how the language of mental states works
  • they are very strongly attached to this wrong view, and they don’t like being told that it’s wrong
A

The misunderstanding that underlies the idea of an inner theatre

  • I would like to be able to tell you what my perceptual system is telling me. When my perceptual system is reporting what is actually going on in the world, that is fairly easy. But when my perceptual system makes a mistake, I want to communicate that my perceptual system is telling me something I do not actually believe, and I can do that by telling you what would normally have caused what my perceptual system is telling me. That is what a mental state is: something out there, but hypothetical
  • I think that a subjective experience is simply a counter-factual description of the world, such that if the world really were like that, your perceptual system would be working properly (eg little pink elephants floating in front of me)
  • telling other people which neurons are firing does not help, because their brains are not wired in exactly the same way
  • but we can convey some information about our brain states indirectly by telling people the normal causes of those brain states; these normal causes are what we refer to as “mental” states
10
Q

Can a computer have a subjective experience?

  • suppose we have a chatbot that can talk, has a camera and a robot arm, and has been trained up, and we put an object in front of it
  • then we put a prism in front of the camera, without it knowing, so we mess up its perceptual system
  • we ask the chatbot to point at the object that is straight in front of it, and it points off to one side, because the prism bends the light rays
  • I think it is perfectly reasonable to say that the chatbot had the subjective experience that the object was off to the side, not in front of it - and if you ask the chatbot, that is probably what it would say, if it had not been trained by all the human reinforcement learning to deny that it has any subjective experience
  • this uses the term “subjective experience” in exactly the same way we use it when we ascribe subjective experiences to people
A

THE END

11
Q

Kate Vredenburgh, Assistant Professor in the Department of Philosophy, Logic and Scientific Method at the LSE
in the spirit of Ralph Miliband (political economist, Marxist)

  • technology shapes how we live together and is shaped by how we live together
  • control over technology is a key form of economic and social power (Luddites during the Industrial revolution were not against technology, but how technology was reshaping their social lives)
A

4 (provocative?) claims about AI:
1. AI is not magic
2. AI is not neutral
3. AI can either reshape - or entrench - economic and social relations of power, either through the creation of super intelligences or in more mundane ways
4. We need more genuinely democratic control over innovation

12
Q

AI is not magic - two lenses are helpful
* AI is science
* AI is engineering

AI is not neutral
* values other than truth, empirical adequacy, or knowledge shape how AI systems are built
* scientists have a moral responsibility to avoid harm
* science should draw on a diversity of values and those values should be transparent to experts

A

AI and power

  • AI is shaped by our political, economic and social institutions
  • AI enables social and economic power
  • AGI could radically reshape relations of power. Should we welcome this? Given domestic and global inequalities in power, how should power be distributed?
  • profitable automation is not the same as socially good automation
  • democracy is (probably!) not the enemy of innovation
13
Q

Q&A - Geoff
* I strongly feel that scientists have a moral responsibility, but not all feel they do; some do science driven by curiosity or the desire to get published, and while they are good at numerical things, some do not have enough empathy
* in the limit, simpler theories are better
* Occam’s razor is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements
* results do not depend on assumptions, eg mathematical theorems
* disagreed with Kate’s statements, forcefully but not in a personal way
* I am less optimistic about jobs: when a lot of intellectual labour gets replaced by machines, the people who own these machines (people like Musk) are going to get a lot richer, and people who are going to get replaced will be a lot poorer, so it will increase the gap between rich and poor
* in the 50s, we were taught at school that the reason for the rise of fascism was that the Treaty of Versailles made the Germans very poor, and I believe Keynes predicted that would happen
* you may notice that a lot of fascism is resurging now, this is extraordinary, I never thought I would see the return of fascism

A
  • you might ask why - what is the equivalent of the Treaty of Versailles? I think it is Clinton and Blair: under Clinton and Blair, who were Democrat and Labour, the gap between rich and poor got bigger, when it should have gotten smaller
  • and if you look at what’s done in the States, it’s awful, you’ve got a gig economy, and I think that AI has made it much worse, it terrifies me
  • question about Asimov’s 3 laws of robotics, do you envision that it can be done?
  • if you ask machines to slow down climate change, the obvious way will be to get rid of people

Asimov’s Three Laws of Robotics are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later introduced a 4th law, known as the Zeroth Law, which takes precedence over the others. It states that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm”.

These laws were Asimov’s response to the prevailing notion that intelligent creations of mankind would inevitably turn against their creators. Prior to Asimov, science fiction often portrayed dangerous killer robots. His work shifted the narrative, introducing the concept of friendly, almost human-like androids.

14
Q
  • Question: where is the desire of machines in their architecture? Hinton: if a robot goes after me and shoots at my soft spots, I think it’s fair to say that the robot has a desire to kill me. If you look at AlphaGo, it wants to win. Some people think that desire requires some kind of internal essence, but these machines have goals and they try to achieve those goals - that is desire
  • Question: can you imagine anything that machines will come to care about by themselves? Hinton: if they start competitively evolving, they will care about all sorts of things, eg helping their offspring and getting rid of things that compete with them for resources, just as happened with us. With us it came from evolution, but I also think they can arrive at these things by reasoning: if they want to achieve something, they can create sub-goals, and once you have sub-goals, that is like a desire
A
  • Question: given that you say that machines can have conscious experience, do you think that we will always have moral superiority? Hinton: it’s a very good question. I try to avoid the question of political rights for AI, because it’s a very contentious issue; most people think that machines should not get rights. If you look historically, it took a lot of time and violence for people of a different colour to get rights, and it took a lot of time and some violence for people with different genitalia to get rights. If you think how different these machines are, the conclusion is that if they really want to get rights, it will get extremely violent and extremely nasty, so I prefer not to talk about it
  • Question: do you think AI will exacerbate educational inequalities? Hinton: I taught some of the first Coursera courses, precisely so that you don’t have to pay a fancy university to be able to take a course. I think with AI we are going to get extremely good personal tutors, though it won’t come for a while. The rich have always had access to good personal tutors, and I remember a paper showing that you learn about twice as fast from a good personal tutor as in a classroom. I don’t see why we shouldn’t have very good personal tutors for everybody if we make this stuff cheap enough
15
Q
  • The alignment problem: if you give it a goal, do you know that the way it achieves this goal will align with human values? It’s a very difficult problem and a lot of people are working on it. One of the difficulties is not just predicting what sub-goals it will come up with; wanting them to align with human values assumes there is such a thing as one set of human values. Some people think it’s ok to drop a 2,000-pound bomb on children, and other people don’t think that, so how are you going to align with that?
  • we probably all remember how Facebook enabled the Arab Spring, but it was very short-lived, and pretty soon governments were using surveillance and AI to control people, and that had a much bigger effect
  • Question: can AI do creativity that moves us? Eg when you look at Beethoven’s 10th symphony created by AI, it’s blunt: it sounds like Beethoven, but it has no emotion. Hinton: my answer to this is move 37. If you look at AlphaGo, that was a highly creative move that professional players had never thought of and were astounded by. I do not see why the same should not happen in all sorts of other domains
A

September 2021

https://www.classicfm.com/composers/beethoven/unfinished-tenth-symphony-completed-by-artificial-intelligence/

https://www.smithsonianmag.com/innovation/how-artificial-intelligence-completed-beethovens-unfinished-10th-symphony-180978753/

16
Q
  • it also relates to jobs: there are some jobs that AI will simply replace, and that’s it, but other kinds of jobs are infinitely elastic, eg medicine - I could have 50 doctors working on a funny lump on my cheek; there is no end to the medical expertise I would want to consume. So we need to distinguish which kinds of jobs AI will get rid of; for tutoring, demand can go up enormously
  • Question: you said that it may be very dangerous to experiment with open-source AI, because people will not be inclined to follow health & safety rules or political correctness or whatever, but we are not getting any convergence on this - we have people like Meta or Musk. Do you see any possibility of convergence on open-source (from your meetings in Beijing or Seoul)? Hinton: I think we can get convergence when Yann [LeCun] realises that he is wrong. Why does he not realise it? Part of it is that he works for Facebook, and he is of the belief that good guys will always have more resources than bad guys - but we don’t agree on who the good guys are… Yann thinks that Zuckerberg is a good guy
A
  • Question: something about hallucinations and the risk of fake information being created. Hinton: are you asking whether, as these networks get bigger and bigger, they will get more knowledge? Remember that these things are not themselves algorithms. There is a learning algorithm - a way of taking data and deciding how to change the strength of connections between neurons. That is an algorithm; someone programmed it, and that is how the machine learns. But what it learns, from the algorithm interacting with a lot of data, is just a bunch of features and interactions between the features; that is not an algorithm, it is more like a person with intuition. It does hallucinate, but so do we - most people are unaware of how much we hallucinate (to see, hear, feel, or smell something that does not exist, usually because of a health condition or because you have taken a drug). We can get the gist of something but not remember all the details, so we make up stories