Philosophy and Artificial Intelligence 2024


Jobst Landgrebe and Barry Smith

MAP, USI, Lugano, Spring 2024

Background

Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.

Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, which can now gain real hands-on experience of what AI can do.

These developments in AI open up a series of questions such as:

  • Will the powers of AI continue to grow in the future, and if so, will they ever reach the point where AI systems can be said to have intelligence equivalent to or greater than that of a human being?
  • Could we ever reach the point where we accept the thesis that an AI system could have something like consciousness or sentience?
  • Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?
  • Can quantum computers enable a stronger AI than what we have today?

We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.

Some of the material for this class is derived from our book

Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Routledge 2022).

and from the companion volume

Symposium on Jobst Landgrebe and Barry Smith's Why Machines Will Never Rule the World — Guest editor, Janna Hastings, University of Zurich

which will appear as a special issue of the open access journal Cosmos + Taxis in early 2024.


Faculty

Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 17 years of experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician.

Barry Smith is one of the world's most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems that arise when attempts are made to compare or combine heterogeneous bodies of data.

Course Description

Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create an artificial system that is as intelligent as a human being. Recent striking successes such as AlphaFold have convinced many not only that this objective is attainable but also that in the not too distant future machines will become even more intelligent than human beings.

The actual and possible developments in AI open up a series of striking questions such as:

  • Can a computer have a conscious mind?
  • Can a computer have desires, a will, and emotions?
  • Can a computer have responsibility for its behavior?
  • Would machine intelligence, if there is such a thing, be something comparable to human intelligence or something quite different?

In addition, new developments in the AI field make it possible for us to consider a series of philosophical questions in a new light, including:

  • Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?
  • What is it for a human to behave in an ethical manner? (Could there be something like machine ethics? Could machines used in fighting wars be programmed to behave ethically?)
  • What is a meaningful life? If routine, meaningless work in the future is performed entirely by machines, will this make possible new sorts of meaningful lives on the part of humans?

After introducing the relevant ideas and tools from both AI and philosophy, all the aforementioned questions will be thoroughly addressed in class discussions. The class will close with presentations of papers on relevant topics given by students.

Draft Schedule

Tuesday, February 20 (14:30-17:15) Why Machines Will Never Rule the World

Room: A23

This is an introduction to the book, with an emphasis on the relation between a human mind and the intelligence that might be ascribed to a machine.

The classical psychological definitions of intelligence are:  

A. the ability to adapt to new situations (applies both to humans and to animals) 
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience 

What are the essential marks of human intelligence? 

Readings:

Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.

Human and machine intelligence

Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence

Wednesday, February 21 (14:30-17:15): The Glory and the Misery of ChatGPT

Room: A23

An introduction to ChatGPT. How is it built? How does it work? Is it intelligent?
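
How does a system like ChatGPT work? As a purely illustrative sketch (our own toy, not a description of how ChatGPT itself is built or trained), the Python fragment below shows the shared shape of all autoregressive language models: estimate a probability distribution over the next token, sample from it, append the sample to the context, and repeat. Here the "model" is just a table of character bigram counts over a tiny invented corpus; a real LLM replaces the table with a transformer network trained on vast text collections.

  # A character-level bigram "language model": an illustrative toy, not how
  # ChatGPT is actually built. It predicts the next character from counts over
  # a tiny corpus, then generates text by stochastic sampling. Real LLMs run the
  # same autoregressive loop with a transformer in place of the count table.
  import random
  from collections import Counter, defaultdict

  corpus = "the cat sat on the mat. the dog sat on the log."

  # Count how often each character is followed by each other character.
  counts = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      counts[prev][nxt] += 1

  def sample_next(prev: str) -> str:
      """Sample the next character in proportion to its observed frequency."""
      chars, weights = zip(*counts[prev].items())
      return random.choices(chars, weights=weights)[0]

  def generate(seed: str = "t", length: int = 40) -> str:
      """Generate text one character at a time, feeding each output back in."""
      out = seed
      for _ in range(length):
          out += sample_next(out[-1])
      return out

  print(generate())  # nonsense that looks vaguely English-like, new each run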

Can ChatGPT become intelligent?

Are Large Language Models a threat to humanity?

Capabilities, or: What do IQ tests measure?

Slides

Is Psychology Finished?

Slides

The human brain and the theory of complex systems

Jobst Landgrebe and Barry Smith: Making AI Meaningful Again
S. Thurner et al. (2018): Introduction to the theory of complex systems (Oxford)

Thursday, February 22 (14:30 - 17:15): Can an Artificial Intelligence Act?

Room: A23

What is agency?

What are the different types of agency?
Examples of collective agency, government agency, agency of socio-technical systems (armies, corporations, ...)
Relation between agency and responsibility. (Responsibility as the origin of ethics.)
Can an AI be responsible?
Can there be such a thing as an AI will?

Agency and the capacities and limits of AI

Case study: AI and economic planning
Hayek's knowledge problem
The price system and market competition
Market economies vs planned economies: the agents at stake
Proposed ways to use AI to plan the economy
The role of the entrepreneur
Can AI be entrepreneurial?

Background reading:

Friday, February 23 (9:30 - 12:15) Minds, Brains and Programs

Room: A23

Machines cannot have intentionality; they cannot have experiences which are about something.

John Searle: Minds, Brains, and Programs

Computers cannot have a will, because computers don't give a damn:

The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity, since an AI will have no desire for self-preservation: "Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’", Fortune, June 15, 2023. See also here.

Implications of the absence of a machine will:

The problem of the singularity (when machines will take over from humans) will not arise
The idea of digital immortality will never be realized (Slides)
The idea that human beings are simulations can be rejected
There can be no AI ethics (only: ethics governing human beings when they use AI)
Fermi's paradox is solved

Monday, May 13 (9:30 - 12:15) Does AI Pose a Threat to Humanity?

Room: A23

Background: Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)

Motion: Artificial Intelligence Poses a Threat to the Survival of Humanity that must be Actively Addressed by Government

Topics to be dealt with include:

Is "AI ethics" a misnomer (rather like "gun ethics" or "car ethics")?
How, if at all, can we regulate the use of AI for military purposes?
Are efforts to regulate AI naive?
Are efforts on the part of big companies to regulate AI in fact attempts by those big companies to block new entrants into the market?

Tuesday, May 14 (9:30 - 12:15) From Turing Machines to Quantum Computers

Room: A23

Turing Machines

Church-Turing computability
Classical computation: binary logic of computers, registers, logic gates and circuits, examples of circuits
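
To make the notion of a Turing machine concrete, here is a minimal simulator in Python. It is an illustrative toy written for this page (the state names, transition table and tape convention are our own choices, not taken from the readings): the machine increments a binary number by one.

  # A minimal Turing machine simulator (illustrative toy, not from the readings).
  # The machine increments a binary number: it scans to the right end of the
  # input, then moves left turning trailing 1s into 0s until it can write a 1.
  BLANK = "_"

  # (state, symbol read) -> (symbol to write, head move, next state)
  TRANSITIONS = {
      ("scan",  "0"):   ("0",   +1, "scan"),
      ("scan",  "1"):   ("1",   +1, "scan"),
      ("scan",  BLANK): (BLANK, -1, "carry"),
      ("carry", "1"):   ("0",   -1, "carry"),
      ("carry", "0"):   ("1",    0, "halt"),
      ("carry", BLANK): ("1",    0, "halt"),
  }

  def run(tape_str: str) -> str:
      tape = dict(enumerate(tape_str))      # sparse tape: cell index -> symbol
      head, state = 0, "scan"
      while state != "halt":
          symbol = tape.get(head, BLANK)
          write, move, state = TRANSITIONS[(state, symbol)]
          tape[head] = write
          head += move
      cells = range(min(tape), max(tape) + 1)
      return "".join(tape.get(i, BLANK) for i in cells).strip(BLANK)

  print(run("1011"))   # 11 in binary -> "1100" (12)
  print(run("111"))    # 7 in binary  -> "1000" (8)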

Quantum Mechanics

Introduction, double slit, uncertainty, Stern-Gerlach, Hamiltonian, Hilbert space
Quantum computing

Quantum bits, registers, quantum gates, simple quantum algorithm, quantum error (correction), future of quantum computing
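
As a small numerical taster of the items just listed (our own sketch, not part of the assigned readings), the fragment below represents a single qubit as a two-component state vector, the Hadamard gate as a 2x2 matrix, and obtains measurement probabilities from squared amplitudes; applying the gate twice shows interference, which no classical coin flip can reproduce.

  # A single qubit and a Hadamard gate, using plain linear algebra (numpy).
  # Illustrative sketch only; real quantum devices also have to contend with
  # noise and error correction, which this toy ignores.
  import numpy as np

  ket0 = np.array([1.0, 0.0])              # the state |0>

  H = np.array([[1,  1],
                [1, -1]]) / np.sqrt(2)     # Hadamard gate

  psi = H @ ket0                           # equal superposition (|0> + |1>)/sqrt(2)
  probs = np.abs(psi) ** 2                 # Born rule: squared amplitudes
  print(probs)                             # [0.5 0.5]

  # Applying H twice brings the state back to |0>: the amplitudes interfere.
  print(np.abs(H @ psi) ** 2)              # [1. 0.]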

Philosophical interpretation of quantum computing

Why quantum computers are Turing machines

Background

Mikhail Dyakonov, The Case Against Quantum Computing
Nielsen and Chuang, Quantum Computation and Quantum Information
Quantum Computing 1
Quantum Computing 2
Slides

Wednesday May 15 (9:30 - 12:15): The Use of AI in Scientific and Medical Research

Room: A23

Thursday May 16 (14:30 - 18:15) AI and Complex Systems

Room: A23

What does it mean to say that Large Language Models are models?

How do we define 'model'?

The Limits of Mathematical Models and the Limits of AI

Slides
All science requires mathematical models
Types of models 1: descriptive, explanatory, predictive
Types of models 2: qualitative, quantitative
All predictive models are quantitative (a toy example is sketched after this list)
Synoptic and Adequate models
Computability
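
To illustrate the claim in the list above that predictive models are quantitative, here is the most modest predictive model imaginable: a straight line fitted to a handful of invented data points and used to extrapolate. It is a toy of our own, not one of the book's case studies; whether such a model is synoptic or adequate for a given target is exactly what the session discusses.

  # The simplest kind of quantitative predictive model: fit y = a*x + b to
  # observations by least squares and extrapolate. Illustrative toy only.
  import numpy as np

  x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # observed inputs (invented data)
  y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])   # observed outputs (roughly 2x + 1)

  a, b = np.polyfit(x, y, deg=1)            # slope and intercept
  print(f"fitted model: y ~ {a:.2f}*x + {b:.2f}")

  x_new = 6.0
  print(f"prediction at x = {x_new}: {a * x_new + b:.2f}")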

Systems

System elements and system interactions
Systems are fiat entities: they are a product of delimitation
System boundaries
Relatively isolated systems
Intentions and drivenness
No emulation of animate drivenness

AI and the Mathematics of Complex Systems

Slides
Comprehensive and partial models
The scope of extended Newtonian mathematics
Seven Properties of complex systems
Examples of complex systems (a minimal numerical sketch follows after this list)
Human beings as complex systems
Complex systems of complex systems
Animate complex systems are organized and stable
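
As a numerical warm-up for this session, consider the logistic map, a one-line non-linear rule that already shows sensitive dependence on initial conditions, one of the reasons why long-run prediction of complex systems fails. It is a toy of our own choosing, not one of the book's examples, and a single equation is of course far simpler than the animate complex systems discussed above.

  # Sensitive dependence on initial conditions in the logistic map
  # x_{n+1} = r * x_n * (1 - x_n), with r = 4 (the chaotic regime).
  # Two trajectories that start a hair's breadth apart soon disagree completely.
  def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
      xs = [x0]
      for _ in range(steps):
          xs.append(r * xs[-1] * (1.0 - xs[-1]))
      return xs

  a = logistic_trajectory(0.2)
  b = logistic_trajectory(0.2 + 1e-10)   # initial difference: one part in ten billion

  for n in (0, 10, 20, 30, 40, 50):
      print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.3e}")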

AI and the Ontology of Power, Social Interaction and Ethics

Preliminary Video
Models in Science, from Stanford Encyclopedia of Philosophy

Friday May 17 (9:30-12:15) Student Presentations and Concluding Survey

Room: A23
Student Presentations

Background Material

An Introduction to AI for Philosophers

Video
Slides

(AI experts are invited to criticize what I have to say in this talk)

An Introduction to Philosophy for Computer Scientists

Video
Slides

(Philosophers are invited to criticize what I have to say in this talk)

John McCarthy, "What has AI in common with philosophy?"