Philosophy and Artificial Intelligence 2022
Jobst Landgrebe and Barry Smith
MAP, USI, Lugano, Spring 2022
Much of the material for this class is derived from the book Why Machines Will Never Rule the World: Artificial Intelligence without Fear, currently in production with Routledge, co-authored by Landgrebe and Smith.
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 17 years of experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician.
Barry Smith is one of the world's most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.
Draft Schedule
Tuesday March 1 2022 15.30 - 18.15 (3h): Why Machines Will Never Rule the World
- Announcement: Why Machines Will Never Rule the World
Introduction to the class
What is computation?
What is a language?
- The Turing Test and the problem of natural language production
Monism and the mind-body continuum
What is consciousness?
What is will?
Can machines have a will?
What is intentionality?
Readings:
- John Searle: Minds, Brains, and Programs
- Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence
Wednesday March 2 2022 13.30 - 16.15 (3h) The human mind; animal, human and machine intelligence
Intelligence
The classical psychological definitions of intelligence are:
- A. the ability to adapt to new situations (applies both to humans and to animals)
- B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience
What are the essential marks of human intelligence?
For consideration in Wednesday's session: to what extent can artificial intelligence be achieved?
Readings:
- Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.
The Legg-Hutter Definition of Intelligence (a brief sketch of the definition is given after the readings below)
What is it that researchers and engineers are trying to do when they talk of achieving ‘Artificial Intelligence’?
To what extent can AI be achieved?
Problems with the Legg-Hutter Definition of Intelligence
Readings:
- Shane Legg and Marcus Hutter: Universal Intelligence: A Definition of Machine Intelligence
- Jobst Landgrebe and Barry Smith: Making AI Meaningful Again
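For orientation, the definition proposed in the Legg-Hutter paper listed above can be sketched as follows (notation as in that paper; this is a rough sketch, not a substitute for the reading). The universal intelligence of an agent is its expected performance summed over all computable environments, each weighted by the complexity of that environment:

% Sketch of the Legg-Hutter definition of universal intelligence (their notation).
% E is the set of computable, reward-summable environments, K(\mu) is the
% Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected
% total reward earned by agent \pi when interacting with \mu.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Since K is not computable, \Upsilon itself cannot be computed exactly; this is one of the problems with the definition taken up in this session.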
Thursday March 3 2022 08.30 - 12.00 (4h) AI for Philosophers and Philosophy for Computer Scientists
An Introduction to AI for Philosophers (AI experts are invited to criticize what I have to say here) Slides
An Introduction to Philosophy for Computer Scientists
Reading: John McCarthy, "What has AI in common with philosophy?"
Tuesday May 17 2022 13.30 - 16.15 (3h) AI Ethics - Why Not Robot Cops?
What is the basis of ethics as applied to humans?
- Utilitarianism
- Value ethics
On what basis should we build an AI ethics?
On why AI ethics is (a) impossible, (b) unnecessary
Readings:
- Moor: Four kinds of ethical robots
- Jobst Landgrebe and Barry Smith: No AI Ethics
- Crane: The AI Ethics Hoax
Wednesday May 18 2022 08.30 - 11.15 (3h) Some Philosophical Questions About AI: Part 1
There Will Be No Singularity: A Survey of the Argument
- The Dreyfus case against the possibility of AGI
- The Landgrebe-Smith case against the possibility of AGI
- Three Types of Impossibility: Technical, Physical, Mathematical
- Structure of the book:
- Part I: Properties of the Human Mind
- Nomological materialistic monism
- Alternative views on the mind-body problem
- Human and machine intelligence
- Capabilities
- Primal intelligence
- Objectifying intelligence
- Definitions of intelligence in AI
- The Legg-Hutter definition (see March 2, above)
- Defining useful machine intelligence
- What is language?
- Language and intentions
- Speech as sensorimotor activity
- Language and dialect change
- The variance and complexity of human language
- Reading: There Will Be No AGI
- Conversation and contexts
- Language production (explicit); language interpretation (implicit)
- The Turing test
- Context horizon
- Social, spatial, temporal context
- Conversation flow and interruptions
- Social and ethical behaviour (see May 17, above)
- Can we build an AI by emulating the brain?
- David Chalmers on Brain Emulation
- Can we build an AI by some other method?
- David Chalmers on Artificial Evolution
- David J. Chalmers: The Singularity: A Philosophical Analysis
- David J. Chalmers: The Singularity: A Reply to Commentators
Thursday May 19 2022 08.30 - 11.15 (3h) Some Philosophical Questions About AI: Part 2
Monday May 23 2022 08.30 - 11.15 (3h) Logic and Complex Systems: Part 1
- The Limits of Mathematical Models
- Models
- All science requires mathematical models
- Types of models 1: descriptive, explanatory, predictive
- Types of models 2: qualitative, quantitative
- All predictive models are quantitative
- Synoptic models
- Adequate models
- Computability
- All AI engineering requires mathematical models
- Explicit and implicit mathematical models
- Systems
- System elements and system interactions
- Systems are fiat entities: they are a product of delimitation
- System boundaries
- Relatively isolated systems
- The Limits and Potential of AI
- Initial utterance production
- Modelling dialogue dynamics mathematically
- Mathematical models of human conversations
- Current state-of-the-art in dialogue systems
- Why conversation machines are doomed to fail
- Chapter 11: Why machines will not master social interaction
- No AI emulation of social behaviour
- Some examples
- No machine intersubjectivity
- No machine social norms
- AI and legal norms
- No machine emulation of morality
- No explicit ethical agents
- No AGI in the kill chain
Tuesday May 24 2022 13.30 - 16.15 (3h) Logic and Complex Systems: Part 2
- AI and the Mathematics of Complex Systems
- Preliminary Slides: AI and the Mathematics of Complex Systems
- Complex systems
- Comprehensive and partial models
- The scope of extended Newtonian mathematics
- Seven properties of complex systems
- Examples of complex systems
- Human beings as complex systems
- Complex systems of complex systems
- Animate complex systems are organized and stable
- Mathematical models of complex systems
- Multivariate distributions
- Adequate models for complex systems
- Predictive models of complex systems
- Why we ain’t rich
- Example of a social fact
- Approaches to complex system modelling
- Naïve approaches
- Consequences for AI applications
- Refined approaches
- Scaling
- Explicit networks
- Evolutionary process models
- Entropy models
- Complex system emulation requires complex systems
- AI and the Ontology of Power, Social Interaction and Ethics
- Preliminary Video
Wednesday May 25 2022 08.30 - 11.15 (3h) Concluding Survey
Student Presentations
- Giacomo De Colle: Mind Embodied and Embedded
- Rocco Felici: On Black Box Models in AI Ethics
- Julius Schulte: Explainable AI: How Disciplines Talk Past Each Other
- Gabriel Carraretto: Backpropagation and the Brain
- Michele Damian: Performance vs. Competence in Human–Machine Comparisons
Course Description
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behaviour that can (broadly) be characterised as intelligent. On the strong version, the ultimate goal of AI is to create an artificial system that is as intelligent as a human being. Recent striking successes such as AlphaGo have convinced many not only that this objective is attainable but also that, in the not too distant future, machines will become even more intelligent than human beings.
The actual and possible developments in AI open up a series of striking questions such as:
- Can a computer have a conscious mind?
- Can it have desires and emotions?
- Would machine intelligence, if there is such a thing, be something comparable to human intelligence or something quite different?
In addition, these developments make it possible for us to consider a series of philosophical questions in a new light, including:
- What is personal identity? Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?
- What is it for a human to behave in an ethical manner? (Could there be something like machine ethics? Could machines used in fighting wars be programmed to behave ethically?)
- What is a meaningful life? If routine, meaningless work in the future is performed entirely by machines, will this make possible new sorts of meaningful lives on the part of humans?
After the relevant ideas and tools from both AI and philosophy have been introduced, all of the aforementioned questions will be thoroughly addressed in class discussions following lectures by Landgrebe and Smith and presentations of relevant papers by the students.
Further Background Reading
- Max More and Natasha Vita-More (Eds.), The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, Wiley-Blackwell, 2013.