Ontology and Artificial Intelligence - Fall 2025


Department of Philosophy, University at Buffalo

Fall 2025 - PHI637SEM-SMI2 - Special Topics: Ontology and Artificial Intelligence - Class Number 24371

Faculty: Barry Smith

Hybrid

in person: Monday 4-5:50pm, 141 Park Hall
remote synchronous, Monday 4-5:50pm; dial-in details will be supplied by email
remote asynchronous, dial-in details will be supplied by email

Grading

Essay (at least 2000 words): 40%
Presentation (and accompanying PowerPoint deck) on December 8: 40%
Class participation: 20%

Attendance at the synchronous session on December 8, featuring student presentations, is compulsory for all students

This is a 2 credit hour course. Students taking this course for 3 credit hours will be required to prepare an additional essay (3000 words), class presentation, and PowerPoint deck.

Introduction

Ontology (also called 'metaphysics') is a subfield of philosophy which aims to establish the kinds of entities in the world -- including both the material and the mental world -- and the relations between them. Applied ontology applies philosophical ideas and methods to the understanding and support of those who are collecting, using, comparing, refining, evaluating or (today above all) generating data.

Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being. ChatGPT and other large language models (LLMs) attempt to generate data from other data that they obtain, for example, by crawling the internet.
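To give a rough sense of what 'generating data from other data' means, here is a toy Python sketch (my own illustration, not part of the course materials): it gathers bigram statistics from a tiny made-up corpus and then samples new word sequences from those statistics. Real LLMs use deep neural networks trained on vastly larger corpora, but the basic idea of producing new text from statistics over existing text is the same.

 import random
 from collections import defaultdict
 
 # Toy illustration only: a bigram (Markov-chain) text generator.
 # It "generates data from other data": statistics gathered from the
 # corpus below are used to sample new word sequences.
 corpus = "ontology studies what exists and how the things that exist relate to one another".split()
 
 # Record which words follow which in the training data
 following = defaultdict(list)
 for current_word, next_word in zip(corpus, corpus[1:]):
     following[current_word].append(next_word)
 
 def generate(start_word, length=8):
     """Generate text by repeatedly sampling an observed next word."""
     words = [start_word]
     for _ in range(length):
         candidates = following.get(words[-1])
         if not candidates:  # no observed continuation; stop here
             break
         words.append(random.choice(candidates))
     return " ".join(words)
 
 print(generate("ontology"))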

Some of the material for this class is derived from the book

Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Routledge 2022, revised and enlarged edition published in 2025).

Draft Schedule

Monday, August 25 (4:00-5:50pm) Ontology and the Origins of AI

Since its inception in the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now gain real hands-on experience of what AI can do. In this first lecture we will address the origins of AI at Stanford University in the 1970s and '80s.

Topics to be dealt with include:

What is ontology?

From Aristotle to 20th century philosophical ontology

Patrick Hayes, Naive Physics and ontology-based robotics

Doug Lenat and the CYC (for 'enCYClopedia') project

Why CYC failed

Why ontology is still important to AI

Background:

Slides

Monday, September 1 NO CLASS: LABOR DAY

Monday, September 8 (4:00-5:50pm) Natural and Artificial Intelligence

The classical psychological definitions of intelligence are:  

A. the ability to adapt to new situations (applies both to humans and to animals) 
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience 

Can a machine be intelligent in either of these senses?

Capabilities, or: what do IQ tests measure?

Slides on IQ tests

Human and animal intelligence

Readings:

Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence

Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia

There's no 'I' in 'AI', Steven Pemberton, Amsterdam, December 12, 2024
1. Ersatz definitions: using words like 'thinks', as in 'the machine is thinking', but with meanings quite different from those we use when talking about human beings. As when we define 'flying' as moving through the air, and then jump up and down and say "look, I'm flying!"
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli
3. If you can't spot irony, you're not intelligent

Monday, September 15 (4:00-5:50pm) Limits of AI?

1. Surveys the technical fundamentals of AI: Methods, mathematics, usage

2. Distinguishes explicit and implicit mathematics

3. The ontology of systems

4. Complex systems (based on Why Machines Will Never Rule the World)

5. The limits of Turing machines

6. Why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.

Conclusions:

AI is a family of algorithms to automate repetitive events
Deep neural networks have nothing to do with neurons
AI is not artificial 'intelligence'; it is a branch of mathematics in which the attempt is made to push the Turing machine to its limits by processing gigantically large amounts of data (a minimal arithmetic sketch of this point follows below)
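To make the last two conclusions concrete, here is a minimal Python sketch (my own illustration, not drawn from the book): a two-layer 'deep neural network' reduced to what it actually is, namely matrix multiplication plus a simple elementwise nonlinearity applied to made-up weights. Nothing in it resembles a biological neuron.

 import numpy as np
 
 # Minimal illustration: a two-layer "deep neural network" is just
 # arithmetic on arrays of numbers; the weights here are arbitrary.
 rng = np.random.default_rng(0)
 
 x = rng.normal(size=(4,))      # an input vector of 4 made-up features
 W1 = rng.normal(size=(8, 4))   # first-layer weights
 b1 = rng.normal(size=(8,))     # first-layer bias
 W2 = rng.normal(size=(1, 8))   # second-layer weights
 b2 = rng.normal(size=(1,))     # second-layer bias
 
 def relu(z):
     """The 'activation function' is just max(0, z), applied elementwise."""
     return np.maximum(0, z)
 
 hidden = relu(W1 @ x + b1)     # layer 1: multiply, add, clip
 output = W2 @ hidden + b2      # layer 2: multiply, add
 print(output)                  # a number produced by arithmetic alone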

Monday, September 22 (4:00-5:50pm) Transhumanism, digital immortality and the Fermi paradox

1. Surveys the full spectrum of transhumanism and its cultural origins.

2. Debunks the feasibility of radically improving human beings via technology.

3. Explains why AI gods are so passionate about creating Artificial General Intelligence

Background:

TESCREALISM
Transhumanism and the Mind-Body Problem

AI and the meaning of life:

Robert Nozick, David Steele in Scott Adams and Philosophy
AI and The Matrix
There is no general AI
Landgrebe on Transhumanism
Considering the existential risk of Artificial Superintelligence
Scott Adams: We are living in a simulation

Ontology of the Eruv (why it would take all the fun out of real estate if everyone could live next door to John Lennon)

Are we living in a simulation?

Background:

David Chalmers' Reality+
Scott Adams: We are living in a simulation
AI and the meaning of life: Robert Nozick, David Steele in Scott Adams and Philosophy
AI and The Matrix
Slides
Are we living in a simulation?
On Chalmers on Reality+?
The Future of Artificial Intelligence

Monday, September 22 (4:00-5:50pm) Can a machine be conscious?

Machines cannot have intentionality; they cannot have experiences which are about something.

Background

Slides
Video
Searle's Chinese Room Argument
Searle: Minds, Brains, and Programs
Making AI Meaningful Again
Søgaard: Do Language Models Have Semantics?

Monday, September 29 (4:00-5:50pm) The machine will

Computers cannot have a will, because computers don't give a damn. Therefore there can be no machine ethics

The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity, since an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather' says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.

Implications of the absence of a machine will:

The problem of the singularity (when machines will take over from humans) will not arise
The idea of digital immortality will never be realized (Slides)
There can be no AI ethics (only: ethics governing human beings when they use AI)

Monday, October 6 (4:00-5:50pm) Are we living in a simulation?

Monday October 13 NO CLASS: FALL BREAK

Monday October 20 (4:00-5:50pm)

An introduction to the statistical foundations of AI

Video

The types of AI

Deterministic AI
  Good old fashioned AI (GOFAI)
Basic stochastic AI
  How regression works (a minimal sketch follows after this list)
Advanced stochastic AI
  Neural networks and deep learning
Hybrid
  Neurosymbolic AI
Background reading: Why Machines Will Never Rule the World, chapter 8
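As a companion to 'how regression works' above, here is a minimal Python sketch (my own illustration, not taken from chapter 8): fitting a straight line y = a*x + b to made-up data by ordinary least squares, the example the list places under basic stochastic AI.

 import numpy as np
 
 # Minimal illustration of how linear regression works: choose the line
 # y = a*x + b that minimizes the squared prediction errors.
 # The data points below are made up for the example.
 x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
 y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
 
 # Build the design matrix [x, 1] and solve the least-squares problem.
 X = np.column_stack([x, np.ones_like(x)])
 (slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)
 
 print(f"fitted line: y = {slope:.2f} * x + {intercept:.2f}")
 print("prediction at x = 6:", slope * 6 + intercept)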

Monday October 27 (4:00-5:50pm) AI and World Models

Affordances and the background of Artificial Intelligence
Making AI Meaningful Again

Monday November 3 (4:00-5:50pm) Personal Knowledge

Beverley on the use of ontologies to enhance AI models

Monday November 10 (4:00-5:50pm) AI and science: Why the replication problem is here to stay

Complex Systems and Cognitive Science: Why the Replication Problem is here to stay

The 'replication problem' is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of 'open science'. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.

Slides

Is Psychology Finished?

Slides

Background

Reproducibility of Scientific Results, Stanford Encyclopedia of Philosophy, 2018
Science has been in a “replication crisis” for a decade
Irreproducibility Crisis and the Lehman Crash, Barry Smith, YouTube, 2020

Monday November 17 (4:00-5:50pm)

Monday November 24 (4:00-5:50pm)

Monday December 1 (4:00-5:50pm) Personal knowledge

Explicit, implicit, practical, personal and tacit knowledge
Video
Knowing how vs Knowing that
Personal knowledge and science
Creativity
Empathy
Entrepreneurship
Leadership and control (and ruling the world)

Monday December 8 (4:00-5:50pm) Compulsory (in-person or synchronous online) oral presentation

Background Material

An Introduction to AI for Philosophers

Video
Slides

(AI experts are invited to criticize what I have to say in this talk)

An Introduction to Philosophy for Computer Scientists

Video
Slides

(Philosophers are invited to criticize what I have to say in this talk)

John McCarthy, "What has AI in common with philosophy?"

and from its companion volume

Symposium on Why Machines Will Never Rule the World