Philosophy and Artificial Intelligence 2021
Barry Smith
MAP, USI, Lugano, Spring 2021
Schedule
Monday February 22 2021 14:30 - 17:15: Some examples of philosophical problems
Introduction to the class
What is computation?
What is a language?
- The Turing Test and the problem of natural language production
What is consciousness?
What is will?
Can machines have a will?
What is intentionality?
Readings:
- John Searle: Minds, Brains, and Programs
- Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence
Tuesday February 23 2021 14:30 - 17:15: The Impossibility of Digital Immortality
Part One: Immortality
Transhumanism and Identity: Can we upload the contents of our brains to a computer and become immortal?
Why you cannot exist outside your body
Readings:
- Martine Rothblatt: Mind is Deeper Than Matter [TO BE SUPPLIED AT USI SITE]
- Scott Adams: We are living in a simulation
- AI and The Matrix
Part Two: Intelligence
The classical psychological definitions of intelligence are:
- A. the ability to adapt to new situations (applies both to humans and to animals)
- B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience
What are the essential marks of human intelligence?
For consideration in Wednesday's session: to what extent can artificial intelligence be achieved?
Readings:
- Linda S. Gottfredson: Mainstream Science on Intelligence, Intelligence 24 (1997), pp. 13–23.
Wednesday February 24 2021 14:30 - 16:00: The Legg-Hutter Definition of 'Universal Intelligence'
(with Jobst Landgrebe)
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 16 years of experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician.
What is it that researchers and engineers are trying to do when they talk of achieving ‘Artificial Intelligence’?
To what extent can AI be achieved?
Problems with the Legg-Hutter Definition of Intelligence
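For orientation, here is a sketch of the definition at issue, as given in the Legg and Hutter paper listed in the readings below. The 'universal intelligence' of an agent π is its expected performance summed over all computable reward-generating environments μ, with simpler environments weighted more heavily:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Here E is the class of computable environments, K(μ) is the Kolmogorov complexity of μ, and V^π_μ is the expected total reward that the agent π achieves in μ. One frequently noted difficulty, and a natural entry point for the session's critique, is that K is not computable, so Υ can be neither calculated exactly nor directly measured.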
Readings:
- Shane Legg and Marcus Hutter: Universal Intelligence: A Definition of Machine Intelligence
- Jobst Landgrebe and Barry Smith: Making AI Meaningful Again
Friday February 26 2021 16:30 - 18:00: AI Ethics
(with Jobst Landgrebe)
What is the basis of ethics as applied to humans?
- Utilitarianism
- Value ethics
On what basis should we build an AI ethics?
On why AI ethics is (a) impossible, (b) unnecessary
Readings:
- Moor: Four kinds of ethical robots
- Jobst Landgrebe and Barry Smith: No AI Ethics
- Crane: The AI Ethics Hoax
Monday May 17 2021 14:30 - 18:00 (Room A12) Some Philosophical Questions About AI
Student presentations
- Tommaso Soriani: Of (Zombie) Mice and Animats
- Maria Andromachi Kolyvaki: Statistical Learning Theory as a Framework for the Philosophy of Induction
- Ismaele Affini: The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity
- Anita Buckley: The limits of machine intelligence
- Osama Khalil: Trolleyology: "Would you kill the fat man?"
- There Will Be No Singularity: A Survey of the Argument
- The Dreyfus case against the possibility of AGI
- Our case against the possibility of AGI
- Three Types of Impossibility: Technical, Physical, Mathematical
- Structure of the book:
- Part I: Properties of the Human Mind
- Nomological materialistic monism
- Alternative views on the mind-body problem
- Human and machine intelligence
- Primal intelligence
- Objectifying intelligence
- Definitions of intelligence in AI
- The Legg-Hutter definition (see Feb. 24)
- Defining useful machine intelligence
- What is language?
- Language and intentions
- Speech as sensorimotor activity
- Language and dialect change
- The variance and complexity of human language
- Reading: There Will Be No AGI
- Conversation and contexts
- Language production (explicit); language interpretation (implicit)
- The Turing test
- Context horizon
- Social, spatial, temporal context
- Conversation flow and interruptions
- Social and ethical behaviour
Is an AI really an intelligence?
- What sorts of problems can AI solve?
- What sorts of problems can AI not solve?
- Can we build an AI by emulating the brain?
- David Chalmers on Brain Emulation
- Can we build an AI by some other method?
- David Chalmers on Artificial Evolution
Readings:
- David J. Chalmers: The Singularity: A Philosophical Analysis
- David J. Chalmers: The Singularity: A Reply to Commentators
Tuesday May 18 2021 14:30 - 18:00 (Room A12) Language+
- An Ontology of Terrorism
- Sentiment Analysis
- An Ontology of Language
- Language+Behaviour
- Language+Violence
- Slides
Student presentations
- Rwiddhi Chakraborty: The Myth of Hypercomputation
- Amir Sulic: Why general AI will not be realized
- Brian Pulfer: The Singularity and Machine Ethics
- Peter Buttaroni: Adversarial Examples and the Deeper Riddle of Induction
Wednesday May 19 2021 14:30 - 18:00 (Room A21) First Dialogue with Jobst Landgrebe
- AI and the Mathematics of Complex Systems
- Preliminary Slides
Thursday May 20 2021 12:30 - 16:00 (Room A12) Second Dialogue with Jobst Landgrebe
- AI and the Ontology of Power
- Preliminary Video
Friday May 21 2021 12:30 - 14:00 (Room A12) Concluding Survey
Student presentations
- Giacomo De Colle: Mind Embodied and Embedded
- Rocco Felici: On Black Box Models in AI Ethics
- Julius Schulte: Explainable AI: How Disciplines Talk Past Each Other
- Gabriel Carraretto: Backpropagation and the Brain
- Michelle Damian: Performance vs. Competence in Human–Machine Comparisons
Course Description
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behaviour that can (broadly) be characterised as intelligent. On the strong version, the ultimate goal of AI is to create an artificial system that is as intelligent as a human being. Recent striking successes such as AlphaGo have convinced many not only that this objective is attainable but also that, in the not too distant future, machines will become more intelligent than human beings.
The actual and possible developments in AI open up a series of striking questions such as:
- Can a computer have a conscious mind?
- Can it have desires and emotions?
- Would machine intelligence, if there is such a thing, be something comparable to human intelligence or something quite different?
In addition, these developments make it possible for us to consider a series of philosophical questions in a new light, including:
- What is personal identity? Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?
- What is it for a human to behave in an ethical manner? (Could there be something like machine ethics? Could machines used in fighting wars be programmed to behave ethically?)
- What is a meaningful life? If routine, meaningless work comes to be performed entirely by machines, will this make possible new sorts of meaningful lives for humans?
After the relevant ideas and tools from both AI and philosophy have been introduced, all of the above questions will be addressed in class discussions, following lectures by Drs Facchini and Smith and presentations of relevant papers by the students.
Further Background Reading
- Max More and Natasha Vita-More (Eds.), The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, Wiley-Blackwell, 2013.