Philosophy and Artificial Intelligence 2025
Jobst Landgrebe and Barry Smith
MAP, USI, Lugano, Spring 2025
Introduction
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and AI winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.
These developments in AI open up a series of questions such as:
- Will the powers of AI continue to grow in the future, and if so, will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?
- Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?
- Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?
- Can quantum computers enable a stronger AI than what we have today?
- Can a computer have desires, a will, and emotions?
- Can a computer have responsibility for its behavior?
- Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.
Some of the material for this class is derived from our book
- Why Machines Will Never Rule the World: Artificial Intelligence without Fear (1st Edition, Routledge 2022).
and from the companion volume
- Symposium on Why Machines Will Never Rule the World — Guest editor, Janna Hastings, University of Zurich
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.
Faculty
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years' experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.
Barry Smith is one of the world's most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.
Grading
- Essay with presentation: 80%
- Essay with no presentation: 95%
- Presentation: 15%
- Class Participation: 5%
Draft Schedule
Monday, February 17 (14:30-17:15) Introduction
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy
Part 2: What are the essential marks of human intelligence?
The classical psychological definitions of intelligence are:
- A. the ability to adapt to new situations (applies both to humans and to animals)
- B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience
Can a machine be intelligent in either of these senses?
Readings:
- Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.
- Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence
- Jobst Landgrebe: Deep reasoning, abstraction and planning
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia
- There's no 'I' in 'AI', Steven Pemberton, Amsterdam, December 12, 2024
- 1. Ersatz definitions: using words like 'thinks', as in 'the machine is thinking', but with meanings quite different from those we use when talking about human beings. As when we define 'flying' as moving through the air, and then jump up and down saying "look, I'm flying!"
- 2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli
- 3. If you can't spot irony, you're not intelligent
Tuesday, February 18 (09:30-12:15) Limits and Dangers of AI?
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage.
2. Outlines the theory of complex systems documented in our book
3. Shows why AI cannot model complex systems adequately and synoptically, and why AI systems therefore cannot reach a level of intelligence equal to that of human beings.
Background:
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable
Arvind Narayanan and Sayash Kapoor, AI Snake Oil
Arnold Schelsky, The Hype Book, especially Chapter 1.
Wednesday, February 19 (13:30 - 16:15) Transhumanism and digital immortality
1. Surveys the full spectrum of transhumanism and its cultural origins.
2. Debunks the claim that human beings can be radically improved via technology.
Background:
- TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence
- Considering the existential risk of Artificial Superintelligence
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?
The machine will
Computers cannot have a will, because computers don't give a damn. Therefore there can be no machine ethics.
- The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.
Implications of the absence of a machine will:
- The problem of the singularity (when machines will take over from humans) will not arise
- The idea of digital immortality will never be realized (Slides)
- The idea that human beings are simulations can be rejected
- There can be no AI ethics (only: ethics governing human beings when they use AI)
- Fermi's paradox is solved
Background:
- Slides
- Video
- Searle's Chinese Room Argument
Machines cannot have intentionality; they cannot have experiences which are about something.
- Searle: Minds, Brains, and Programs
Monday, April 28 (14:30 - 17:30) Are we living in a simulation?
Are we living in a simulation?
The Fermi Paradox
Bostrom's Simulation Argument
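Though the details belong to the session itself, the quantitative core of Bostrom's argument fits in one line. The following is a simplified form of the fraction from Bostrom's 2003 paper "Are You Living in a Computer Simulation?"; the notation is a common shorthand, not the course's own:

```latex
% f_sim : fraction of all observers with human-type experiences
%         that live in simulations
% f_p   : fraction of human-level civilizations that reach a posthuman
%         stage and choose to run ancestor simulations
% N-bar : average number of ancestor simulations run by such a civilization
f_{\text{sim}} \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

If the product of f_p and N-bar is very large, f_sim is close to 1, which yields Bostrom's trilemma: either almost no civilization reaches the simulating stage, or almost none chooses to run ancestor simulations, or we almost certainly live in a simulation.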
Background:
David Chalmers, Reality+
Dialog with Chalmers avatar
Tuesday, April 29 (13:30 - 16:30) An introduction to the statistical foundations of AI
An introduction to the statistical foundations of AI
Video
The types of AI
- Deterministic AI
  - Good old fashioned AI (GOFAI)
- Basic stochastic AI
  - How regression works (see the sketch after this list)
- Advanced stochastic AI
  - Neural networks and deep learning
- Hybrid
  - Neurosymbolic AI
Background:
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)
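To give the flavor of the "How regression works" entry above, here is a minimal sketch of basic stochastic AI: fit a line to noisy data by ordinary least squares and read off the learned parameters. The data and all numbers are synthetic, invented purely for illustration:

```python
# Minimal illustration of "how regression works": ordinary least squares.
# The data here are synthetic; real stochastic AI differs mainly in scale.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)            # one input feature
y = 2.5 * x + 1.0 + rng.normal(0, 1, 100)   # true line plus noise

# Design matrix with an intercept column; solve min ||Xw - y||^2.
X = np.column_stack([np.ones_like(x), x])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated intercept = {w[0]:.2f}, slope = {w[1]:.2f}")
# The model "learns" a statistical regularity, but it has no grasp of
# what x and y mean -- a theme the course returns to repeatedly.
```

Neural networks and deep learning generalize the same recipe: stack many parameterized maps with nonlinearities and minimize a loss, again without the system understanding what its inputs and outputs mean.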
Wednesday, April 30 (13:30 - 16:30) Explicit, implicit, practical, personal and tacit knowledge
Personal knowledge
- Explicit, implicit, practical, personal and tacit knowledge
- Video
- Knowing how vs Knowing that
- Personal knowledge and science
- Creativity
- Empathy
- Entrepreneurship
- Leadership and control (and ruling the world)
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay
- The 'replication problem' is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of 'open science'. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.
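A toy simulation makes the statistical side of the problem vivid: when true effects are weak and samples are small, most studies are underpowered, and even genuine findings fail to replicate most of the time. All numbers below are invented for illustration:

```python
# Toy model of the replication problem: many small studies test a real but
# weak effect; we count how often a "significant" finding replicates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
effect, n, trials = 0.2, 30, 10_000   # weak true effect, small samples

def significant() -> bool:
    """Run one two-sample study; True if p < 0.05."""
    a = rng.normal(effect, 1, n)      # treatment group
    b = rng.normal(0, 1, n)           # control group
    return stats.ttest_ind(a, b).pvalue < 0.05

first = np.array([significant() for _ in range(trials)])
second = np.array([significant() for _ in range(trials)])
replicated = (first & second).sum() / max(first.sum(), 1)

print(f"power per study ~ {first.mean():.0%}; "
      f"significant findings that replicate ~ {replicated:.0%}")
# With ~12% power, even a true effect replicates only ~12% of the time.
```

With a true effect of 0.2 standard deviations and 30 subjects per group, power is roughly 12%, so only about one in eight 'successful' studies replicates even though the effect is real.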
Friday, May 2 (13:30-16:30) Are We Living in a Simulation?
- Are we living in a simulation? (Slides)
Background Material
An Introduction to AI for Philosophers
(AI experts are invited to criticize what I have to say in this talk)
An Introduction to Philosophy for Computer Scientists
(Philosophers are invited to criticize what I have to say in this talk)