Ontology and Artificial Intelligence - Fall 2025
Department of Philosophy, University at Buffalo

Fall 2025 - PHI637SEM-SMI2 - Special Topics: Ontology and Artificial Intelligence - Class Number 24371

Faculty: Barry Smith

Hybrid

In person: Monday 4:00-5:50pm, 141 Park Hall
Remote synchronous: Monday 4:00-5:50pm; dial-in details will be supplied by email
Remote asynchronous: dial-in details will be supplied by email

Grading

Essay (at least 2000 words): 40%
Presentation (and accompanying PowerPoint deck) on December 8: 40%
Class Participation for in-person and synchronous enrollees: 20%
Oral exam for asynchronous enrollees: 20%

Attendance at the synchronous session on December 8, featuring student presentations, is compulsory for all students

This is a 2-credit-hour course. Students taking the course for 3 credit hours will be required to prepare an additional essay (3000 words), class presentation, and PowerPoint deck.

Introduction

Ontology (also called 'metaphysics') is a subfield of philosophy which aims to establish the kinds of entities in the world -- including both the material and the mental world -- and the relations between them. Applied ontology applies philosophical ideas and methods to understanding and supporting the work of those who collect, use, compare, refine, evaluate or (today above all) generate data.

Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being. ChatGPT and other large language models (LLMs) attempt to generate data from other data they obtain, for example, by crawling the internet.
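As a toy illustration of what 'generating data from data' amounts to, the following sketch builds a bigram model: it counts which word follows which in a small text and then produces new text by sampling from those counts. (This is only an illustration of the general idea; it is not how ChatGPT is implemented, since production LLMs use transformer neural networks trained on enormous corpora, and the corpus and function names below are invented for the example.)

```python
import random
from collections import defaultdict

# Toy corpus, invented for this example (real LLMs train on vastly larger data).
corpus = ("ontology studies the kinds of entities in the world "
          "and the relations between the entities in the world").split()

# Record which words have been observed to follow which.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start, length=10):
    """Generate new text by repeatedly sampling an observed continuation."""
    words = [start]
    for _ in range(length):
        continuations = follows.get(words[-1])
        if not continuations:        # no observed continuation: stop
            break
        words.append(random.choice(continuations))
    return " ".join(words)

print(generate("the"))
# e.g. "the kinds of entities in the world and the relations between"
```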

Some of the material for this class is derived from the book

Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Routledge 2022, revised and enlarged edition published in 2025).

Draft Schedule

Monday, August 25 (4:00-5:50pm) Good Old-Fashioned AI and Its Ontological Origins

Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.

These developments in AI open up a series of questions such as:

Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?
Could we ever reach the point where we can accept the thesis that an AI system has something like consciousness or sentience?
Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?
Could quantum computers enable a stronger AI than what we have today?
Can a computer have desires, a will, and emotions?
Can a computer have responsibility for its behavior?
Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?

In this first lecture we will address these and a series of other questions at the borderlines of philosophy and AI.

Some background:

Slides
Why Machines Will Never Rule the World

Monday, September 1 NO CLASS: LABOR DAY

Monday, September 8 (4:00-5:50pm) Natural and Artificial Intelligence

The classical psychological definitions of intelligence are:  

A. the ability to adapt to new situations (applies both to humans and to animals) 
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience 

Can a machine be intelligent in either of these senses?

Capabilities, or: what do IQ tests measure?

Slides on IQ tests

Readings:

Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence

Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia

There's no 'I' in 'AI', Steven Pemberton, Amsterdam, December 12, 2024
1. Ersatz definitions: using words like 'thinks', as in 'the machine is thinking', but with meanings quite different from those we use when talking about human beings. As when we define 'flying' as moving through the air, and then jump up and down and say "look, I'm flying!"
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli
3. If you can't spot irony, you're not intelligent

Monday, September 15 (4:00-5:50pm) Limits of AI?

1. Surveys the technical fundamentals of AI: Methods, mathematics, usage

2. Distinguishes explicit and implicit mathematics

3. Outlines the theory of complex systems documented in our book

4. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.

Conclusion:

AI is a family of algorithms to automate repetitive events
Deep neural networks have nothing to do with neurons
AI is not artificial 'intelligence'; it is a branch of mathematics in which the attempt is made to push the Turing machine to its limits by means of gigantically large amounts of data (see the sketch below)
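The following is a minimal sketch (in Python with NumPy, added purely as an illustration and not drawn from the course materials) of the last two points: the 'neurons' of a deep network are simply rows of weight matrices, and a forward pass is nothing more than matrix multiplication combined with an elementwise nonlinearity. The layer sizes and input values below are arbitrary.

```python
import numpy as np

# A two-layer "deep neural network" reduced to its mathematical content:
# matrix multiplications plus an elementwise nonlinearity. No biology involved.
rng = np.random.default_rng(0)

# Arbitrary layer sizes: 4 inputs -> 8 hidden units -> 2 outputs
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

def relu(z):
    """Elementwise nonlinearity: max(0, z)."""
    return np.maximum(0.0, z)

def forward(x):
    """One forward pass: two affine maps with a nonlinearity in between."""
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

x = np.array([1.0, -0.5, 2.0, 0.25])  # an arbitrary input vector
print(forward(x))                      # just arithmetic on arrays
```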

Monday, September 22 (4:00-5:50pm) Transhumanism and digital immortality

1. Surveys the full spectrum of transhumanism and its cultural origins.

2. Debunks the feasibility of radically improving human beings via technology.

3. The ontology of the Eruv (why it would take all the fun out of real estate if everyone could live next door to John Lennon)

4. Massive social agency is what generates all good things (e.g. opera, football, ...) -- and requires authority and punishment

Background:

Slides
Scott Adams: We are living in a simulation
AI and the meaning of life: Robert Nozick, David Steele in Scott Adams and Philosophy
AI and The Matrix
Slides
The Emotion Ontology - Part 1
Slides
Applications of AI to intelligence analysis
Case study: using sentiment analysis for the prediction of terrorist radicalization
Slides
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence
Considering the existential risk of Artificial Superintelligence
Slides
Wittgenstein and the Turing Test

Monday, September 22 (4:00-5:50pm) Can a machine be conscious?

Machines cannot have intentionality; they cannot have experiences which are about something.

Background

Slides
Video
Searle's Chinese Room Argument
Searle: Minds, Brains, and Programs
Making AI Meaningful Again
Søgaard: Do Language Models Have Semantics?

Monday, September 29 (4:00-5:50pm) The machine will

Computers cannot have a will, because computers don't give a damn. Therefore there can be no machine ethics

The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity, since an AI will have no desire for self-preservation: "Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but 'A.I. godfather' says an existential threat is 'preposterously ridiculous'", Fortune, June 15, 2023. See also here.

Implications of the absence of a machine will:

The problem of the singularity (when machines will take over from humans) will not arise
The idea of digital immortality will never be realized
Slides
There can be no AI ethics (only: ethics governing human beings when they use AI)

Monday, October 6 (4:00-5:50pm) Are we living in a simulation?

Massive social agency is what generates all good things (e.g. opera, football, ...) -- and requires authority and punishment
Ontology of the Eruv (why it would take all the fun out of real estate if everyone could live next door to John Lennon)

The Fermi Paradox
Bostrom's Simulation Argument
The idea that human beings are simulations can be rejected
David Chalmers' Reality+

Background

Scott Adams: We are living in a simulation
AI and the meaning of life: Robert Nozick, David Steele in Scott Adams and Philosophy
AI and The Matrix
Slides
Are we living in a simulation?
Are we living in a simulation?
The Future of Artificial Intelligence

Monday October 13 NO CLASS: FALL BREAK

Monday October 20 (4:00-5:50pm)

An introduction to the statistical foundations of AI

Video

The types of AI

Deterministic AI
  Good Old-Fashioned AI (GOFAI)
Basic stochastic AI
  How regression works (see the sketch below)
Advanced stochastic AI
  Neural networks and deep learning
Hybrid AI
  Neurosymbolic AI

Background reading: Why Machines Will Never Rule the World, chapter 8
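As a minimal sketch of the 'How regression works' item above (the data and numbers are invented for illustration and are not drawn from the course readings), ordinary least squares fits a line to noisy data and then uses the fitted coefficients to predict:

```python
import numpy as np

# Made-up data: y is roughly 3*x + 1 plus noise.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

# Design matrix with a column of ones for the intercept.
X = np.column_stack([x, np.ones_like(x)])

# Ordinary least squares: choose the coefficients minimizing squared error.
(slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated slope = {slope:.2f}, estimated intercept = {intercept:.2f}")
print("prediction at x = 12:", slope * 12 + intercept)
```

Basic stochastic AI generalizes this recipe: choose a parametric model, estimate its parameters from data, and use the estimated model for prediction; neural networks do the same with vastly more parameters.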

Monday October 27 (4:00-5:50pm) Patrick Hayes, Neurosymbolic AI, and the Birth of Ontology

Affordances and the background of Artificial Intelligence
Making AI Meaningful Again


Applications of AI to intelligence analysis
Case study: using sentiment analysis for the prediction of terrorist radicalization

Monday November 3 (4:00-5:50pm) Personal Knowledge

Beverley on the use of ontologies to enhance AI models

Monday November 10 (4:00-5:50pm) AI and science: Why the replication problem is here to stay

Complex Systems and Cognitive Science: Why the Replication Problem is here to stay

The 'replication problem' is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of 'open science'. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.

Slides

Is Psychology Finished?

Slides

Background

Reproducibility of Scientific Results, Stanford Encyclopedia of Philosophy, 2018
Science has been in a “replication crisis” for a decade
Irreproducibility Crisis and the Lehman Crash, Barry Smith, YouTube 2020

Monday November 17 (4:00-5:50pm)

Monday November 24 (4:00-5:50pm)

Monday December 1 (4:00-5:50pm) Personal knowledge

Explicit, implicit, practical, personal and tacit knowledge
Video
Knowing how vs Knowing that
Personal knowledge and science
Creativity
Empathy
Entrepreneurship
Leadership and control (and ruling the world)

Monday December 8 (4:00-5:50pm) Compulsory (in-person or synchronous online) oral presentation

Background Material

An Introduction to AI for Philosophers

Video
Slides

(AI experts are invited to criticize what I have to say in this talk)

An Introduction to Philosophy for Computer Scientists

Video
Slides

(Philosophers are invited to criticize what I have to say in this talk)

John McCarthy, "What has AI in common with philosophy?"

and from its companion volume

Symposium on Why Machines Will Never Rule the World