Philosophy and Artificial Intelligence 2025

Jobst Landgrebe and Barry Smith

[https://www.usi.ch/en/education/master/philosophy MAP, USI, Lugano], Spring 2025


'''Introduction'''  


Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called ''Artificial General Intelligence'' (AGI), by which is meant an artificial system that is as intelligent as a human being.


Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.


These developments in AI open up a series of questions such as:
: Will the powers of AI continue to grow in the future, and if so, will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?
: Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?
: Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?
: Can quantum computers enable a stronger AI than what we have today?
: Can a computer have desires, a will, and emotions?
: Can a computer have responsibility for its behavior?
: Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?


We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.


Some of the material for this class is derived from our book  


:''[https://buffalo.box.com/v/Why-Machines Why Machines Will Never Rule the World: Artificial Intelligence without Fear]'' (1st Edition, Routledge 2022).  


and from the companion volume

:''Symposium on Why Machines Will Never Rule the World'' (Guest editor: Janna Hastings, University of Zurich)


which appeared as a special issue of the open-access journal ''Cosmos + Taxis'' in early 2024.


'''Faculty'''

Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.


Barry Smith is one of the world's most [https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:metaphysics widely cited philosophers]. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.  


'''Grading'''

:Essay with presentation: 80%
:Essay with no presentation: 95%
:Presentation: 15%
:Class participation: 5%

'''Draft Schedule'''


==Monday, February 17 (14:30-17:15) Introduction==


'''Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy'''


:[https://buffalo.box.com/v/Living-in-a-Simulation-JL Slides]


:[https://buffalo.box.com/v/Introduction-2025 Video]


:[https://buffalo.app.box.com/v/AI-Without-Fear Why Machines Will Never Rule the World]


'''Part 2: What are the essential marks of human intelligence?'''


The classical psychological definitions of intelligence are:  
:A. the ability to adapt to new situations (applies both to humans and to animals)
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience

Can a machine be intelligent in either of these senses?


:[https://buffalo.box.com/v/What-do-IQ-tests-2022 Slides on IQ tests]
 
'''Readings:'''
 
:Linda S. Gottfredson. [https://www1.udel.edu/educ/gottfredson/reprints/1994WSJmainstream.pdf Mainstream Science on Intelligence]. In: ''Intelligence'' 24 (1997), pp. 13–23.
:Jobst Landgrebe and Barry Smith: [https://arxiv.org/pdf/1906.05833.pdf There is no Artificial General Intelligence]
:Jobst Landgrebe: [https://buffalo.box.com/v/Deep-reasoning Deep reasoning, abstraction and planning]


'''Background''': Ersatz Definitions, Anthropomorphisms, and Pareidolia


:[https://www.youtube.com/watch?v=lS4-QSR1sNk There's no 'I' in 'AI'], Steven Pemberton, Amsterdam, December 12, 2024
::1. Ersatz definitions: using words like 'thinks', as in 'the machine is thinking', but with meanings quite different from those we use when talking about human beings. As when we define 'flying' as moving through the air, and then jumping up and down and saying "look, I'm flying!"
::2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli
::3. If you can't spot irony, you're not intelligent


==Tuesday, February 18 (09:30-12:15) Limits and Dangers of AI?==


[https://buffalo.box.com/v/Landgrebe-AI-Feb-2025 Video]


[https://buffalo.box.com/v/Limits-of-AI-Lugano-2025 Slides]


1. Surveys the technical fundamentals of AI: methods, mathematics, and usage.


2. Outlines the theory of complex systems documented in [https://buffalo.app.box.com/v/AI-Without-Fear our book]


3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.


Background:


'''[https://www.youtube.com/watch?v=1rnam1w8ztM Will AI Destroy Humanity? A Soho Forum Debate]''' (Spoiler: Jobst won)
 
R. V. Yampolskiy, ''[https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X?asin=103257626X&revisionId=&format=4&depth=1 AI: Unexplainable, Unpredictable, Uncontrollable]''
 
Arvind Narayanan and Sayash Kapoor, ''[https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil AI Snake Oil]''
 
Arnold Schelsky, ''[https://buffalo.box.com/v/TheHypeCycle The Hype Book]'', especially Chapter 1.


==Wednesday, February 19 (13:30 - 16:15) Transhumanism and digital immortality==


[https://buffalo.box.com/v/Transhumanism-2025 Video]


[https://buffalo.box.com/v/Transhumanism-Lugano-2025 Slides]


1. Surveys the full spectrum of transhumanism and its cultural origins.


2. Debunks the feasibility of radically improving human beings via technology.


Background:  


:[https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/ TESCREALISM], or: why AI gods are so passionate about creating Artificial General Intelligence
:"[https://www.rcandela.com/uploads/2/0/1/6/20163847/boettke_and_candela_on_the_feasibility_of_technosocialism.pdf On the Feasibility of Technosocialism]"
:[https://michaelnotebook.com/xriskbrief/index.html Considering the existential risk of Artificial Superintelligence]
:"[http://philsci-archive.pitt.edu/19406/7/List-GA-AI.pdf Group Agency and Artificial Intelligence]"
:"[https://www.argumenta.org/wp-content/uploads/2023/06/Argumenta-82-Emanuele-Martinelli-Toward-a-General-Model-of-Agency-1.pdf Toward a General Model of Agency]"
-->


==Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?==  




'''The machine will'''


Computers cannot have a will, because computers ''don't give a damn''. Therefore there can be no machine ethics.

The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation: "Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but 'A.I. godfather' says an existential threat is 'preposterously ridiculous'", ''Fortune'', June 15, 2023.

Implications of the absence of a machine will:
:The problem of the singularity (when machines will take over from humans) will not arise
:The idea of digital immortality will never be realized
:The idea that human beings are simulations can be rejected
:There can be no AI ethics (only: ethics governing human beings when they use AI)
:Fermi's paradox is solved


Background:
 
[https://buffalo.box.com/v/BS-Lugano-Machine-Will Slides]
 
[https://buffalo.box.com/v/Machine-Consciousness-BS-2025 Video]
 
:[https://www.youtube.com/watch?v=tt-JzB50sJE Searle's Chinese Room Argument]
 
Machines cannot have intentionality; they cannot have experiences which are ''about'' something.
 
:Searle: [https://www.law.upenn.edu/live/files/3413-searle-j-minds-brains-and-programs-1980.pdf  Minds, Brains, and Programs]
 
==Monday, April 28 (14:30 - 17:30) Are we living in a simulation?==
 
'''[https://buffalo.box.com/v/BS-Intelligence-Lugano-2025 Are we living in a simulation?]'''
 
'''[https://buffalo.box.com/v/Living-in-a-simulation Video]'''
 
The Fermi Paradox
 
Bostrom's Simulation Argument
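
A compact way to see the structure of the argument is the core formula from Bostrom's 2003 paper "Are You Living in a Computer Simulation?" (our reconstruction and notation; the symbols below are not used elsewhere on this page). Writing <math>f_P</math> for the fraction of human-level civilizations that reach a posthuman stage, <math>\bar{N}</math> for the average number of ancestor-simulations run by such a civilization, and <math>\bar{H}</math> for the average number of individuals who live before that stage, the fraction of all human-like observers who are simulated is

:<math>f_{\mathrm{sim}} = \frac{f_P \,\bar{N}\, \bar{H}}{f_P\, \bar{N}\, \bar{H} + \bar{H}} = \frac{f_P\, \bar{N}}{f_P\, \bar{N} + 1}.</math>

The trilemma follows: unless almost no civilizations reach the posthuman stage, or almost none of those that do choose to run ancestor-simulations, <math>f_{\mathrm{sim}}</math> is close to 1.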


Background


David Chalmers, ''[https://www.amazon.com/Reality-Virtual-Worlds-Problems-Philosophy/dp/0393635805 Reality+]''
[https://www.youtube.com/watch?v=n3VrPAR9Yvs Dialog with Chalmers avatar]


==Tuesday, April 29 (13:30 - 16:30) An introduction to the statistical foundations of AI ==


'''[https://buffalo.box.com/v/AI-and-the-Future An introduction to the statistical foundations of AI]'''


'''[https://buffalo.box.com/v/Statistical-Foundations-of-AI Video]'''


'''The types of AI'''


:Deterministic AI
::Good old-fashioned AI (GOFAI)
:Basic stochastic AI
::How regression works (see the sketch below)
:Advanced stochastic AI
::Neural networks and deep learning
:Hybrid
::Neurosymbolic AI
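
To make the contrast between these types concrete, here is a minimal Python sketch (illustrative only, not course material; the function names and data are our own invention). A hand-written rule stands in for deterministic, GOFAI-style AI, while a tiny least-squares fit shows the simplest case of stochastic AI, in which behavior is induced from data rather than written down in advance; neural networks and deep learning generalize that second idea to millions of parameters and non-linear models.

<pre>
# Deterministic AI (GOFAI-style): behavior is an explicit, hand-written rule.
def gofai_rule(temperature_celsius):
    return "turn heating on" if temperature_celsius < 19.0 else "leave heating off"


# Basic stochastic AI: fit y ~ a*x + b to noisy data by closed-form least squares.
def fit_linear_regression(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = cov_xy / var_x          # estimated slope
    b = mean_y - a * mean_x     # estimated intercept
    return a, b


if __name__ == "__main__":
    # The rule's output is fixed in advance by the programmer.
    print(gofai_rule(17.5))                               # -> turn heating on

    # The regression's parameters depend on the data it is shown
    # (here, noisy samples of the line y = 2x + 1).
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.1, 2.9, 5.2, 6.8, 9.1]
    a, b = fit_linear_regression(xs, ys)
    print(f"estimated model: y = {a:.2f}*x + {b:.2f}")    # close to y = 2x + 1
    print(f"prediction at x = 5: {a * 5 + b:.2f}")
</pre>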


:Background


''Why machines will never rule the world'', chapter 7 (chapter 8 of 2nd edition)


==Wednesday, April 30 (13:30 - 16:30) Explicit, implicit, practical, personal and tacit knowledge==


'''Personal knowledge'''


:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]


:[https://buffalo.box.com/v/Practical-knowledge Video]


:Knowing how vs Knowing that
:Personal knowledge and science
:Creativity
:Empathy
:Entrepreneurship
:Leadership and control (and ruling the world)


''Complex Systems and Cognitive Science: Why the Replication Problem is here to stay''

The 'replication problem' is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of 'open science'. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.

[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]


Background:
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], ''Stanford Encyclopedia of Philosophy'', 2018
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020

==Friday, May 2 (13:30-16:30) Are We Living in a Simulation?==
 
:'''[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]''', Slides
 
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]
:'''[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]''', Slides







'''Background Material'''

''An Introduction to AI for Philosophers''
:Video
:Slides
:(AI experts are invited to criticize what I have to say in this talk)

''An Introduction to Philosophy for Computer Scientists''
:Video
:Slides
:(Philosophers are invited to criticize what I have to say in this talk)

John McCarthy, "What has AI in common with philosophy?"