Philosophy and Artificial Intelligence 2021

Barry Smith

MAP, USI, Lugano, Spring 2021

Schedule

Monday February 22 2021 14:30 - 17:15: Some examples of philosophical problems

Slides

Introduction to the class

What is computation?

What is a language?

The Turing Test and the problem of natural language production

What is consciousness?

What is will?

Can machines have a will?

What is intentionality?

Readings:

John Searle: Minds, Brains, and Programs
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence

Tuesday February 23 2021 14:30 - 17:15 The Impossibility of Digital Immortality

Slides

Part One: Immortality

Transhumanism and Identity: Can we upload the contents of our brains to a computer and become immortal?

Why you cannot exist outside your body

Readings:

Martine Rothblatt: Mind is Deeper Than Matter [TO BE SUPPLIED AT USI SITE]
Scott Adams: We are living in a simulation
AI and The Matrix

Part Two: Intelligence

The classical psychological definitions of intelligence are:  

A. the ability to adapt to new situations (applies both to humans and to animals) 
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience 

What are the essential marks of human intelligence? 

For consideration in Wednesday's session: to what extent can artificial intelligence be achieved? 

Readings:

Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.

Wednesday February 24, 2021 14:30 - 16:00: The Legg-Hutter Definition of 'Universal Intelligence'

Slides
Video

What is it that researchers and engineers are trying to do when they talk of achieving ‘Artificial Intelligence’?

To what extent can AI be achieved? 

Problems with the Legg-Hutter Definition of Intelligence
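
As background for this session, a rough sketch of the definition under discussion (the details should be checked against the Legg and Hutter paper listed in the readings): the universal intelligence Υ(π) of an agent π is its expected performance summed over all computable environments μ in a class E, with each environment weighted by its simplicity 2^(−K(μ)), where K(μ) is the Kolmogorov complexity of μ and V^π_μ is the agent's expected total reward in μ:

\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

On this definition, an agent counts as more intelligent the better it performs across the widest range of simply describable environments, which is one starting point for the criticisms taken up under "Problems with the Legg-Hutter Definition".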

Readings:

Shane Legg and Marcus Hutter: Universal Intelligence: A Definition of Machine Intelligence
Jobst Landgrebe and Barry Smith: Making AI Meaningful Again

Friday February 26 2021 16:30 - 18:00 Dialogue with Jobst Landgrebe on AI Ethics

Slides
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 16 years of experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician.
What is the basis of ethics as applied to humans?
Utilitarianism
Value ethics
On what basis should we build an AI ethics?
On why AI ethics is (a) impossible, (b) unnecessary

Readings:

Moor: Four kinds of ethical robots
Jobst Landgrebe and Barry Smith: No AI Ethics [TO BE SUPPLIED AT USI SITE]

Wednesday May 12 2021 14:30 - 17:15 Brain Emulation

Can we build an AI by emulating the brain?

Chalmers on Brain Emulation

Chalmers on Artificial Evolution

Readings:

David J. Chalmers: The Singularity: A Philosophical Analysis
David J. Chalmers: The Singularity: A Reply to Commentators

Friday May 14 2021 09:30 - 12:15: AI and Ontology

Basic Formal Ontology (BFO) (ISO/IEC 21838-2)
Applications of BFO in AI
Upper Level Ontologies
DOLCE
Slides
Making AI Meaningful Again

Monday May 17 2021 14:30 - 17:15 AI and the Ontology of Complex Systems

AI is a family of algorithms to automate repetitive events
AI is not artificial intelligence; it is a branch of mathematics that attempts to push the Turing machine to its limits by using extremely large amounts of data
What sorts of problems can AI not solve?
Paper: There is no general AI

Student presentations

Tuesday May 18 2021 14:30 - 17:15 Language+

An Ontology of Terrorism
Sentiment Analysis
An Ontology of Language
Language+Behaviour
Language+Violence
Slides

Wednesday May 19 2021 14:30 - 17:15 Emotions and Diseases

Slides
Slides
Basic Emotions
Aesthetic Emotions
Disease Ontology
Infectious Disease Ontology
COVID-19 Ontology

Student presentations: TBD

Thursday May 20 2021 13:30 - 16:15 Second Dialogue with Jobst Landgrebe

1. AI and the Mathematics of Complex Systems
2. AI and the Ontology of Power

Slides

Friday-Saturday May 21-22: SNF Conference on Philosophy and Artificial Intelligence

Course Description

Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behaviour that can (broadly) be characterised as intelligent. On the strong version, the ultimate goal of AI is to create an artificial system that is as intelligent as a human being. Recent striking successes such as AlphaGo have convinced many not only that this objective is attainable but also that, in the not too distant future, machines will become even more intelligent than human beings.

The actual and possible developments in AI open up a series of striking questions such as:

  • Can a computer have a conscious mind?
  • Can it have desires and emotions?
  • Would machine intelligence, if there is such a thing, be something comparable to human intelligence or something quite different?

In addition, these developments make it possible for us to consider a series of philosophical questions in a new light, including:

  • What is personal identity? Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?
  • What is it for a human to behave in an ethical manner? (Could there be something like machine ethics? Could machines used in fighting wars be programmed to behave ethically?)
  • What is a meaningful life? If routine, meaningless work in the future is performed entirely by machines, will this make possible new sorts of meaningful lives on the part of humans?

After the relevant ideas and tools from both AI and philosophy have been introduced, all of the aforementioned questions will be addressed thoroughly in class discussions following lectures by Drs Facchini and Smith and presentations of relevant papers by the students.

Further Background Reading

Jordan Peterson's Essay Writing Guide
Gerald J. Erion and Barry Smith, “In Defense of Truth: Skepticism, Morality, and The Matrix”, in W. Irwin (ed.), The Matrix and Philosophy: Welcome to the Desert of the Real, La Salle and Chicago: Open Court, 2002, 16–27.
Max More and Natasha Vita-More (Eds.), The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, Wiley-Blackwell, 2013.