Ontology and Artificial Intelligence - Fall 2025
Department of Philosophy, University at Buffalo
Fall 2025 - PHI637SEM-SMI2 - Special Topics: Ontology and Artificial Intelligence - Class Number 24371
Faculty: Barry Smith
Hybrid
- in person: Monday 4-5:50pm, 141 Park Hall
- remote synchronous: Monday 4-5:50pm; dial-in details will be supplied by email
- remote asynchronous: dial-in details will be supplied by email; students must attend synchronously (either online or in person) on December 8
Grading for 2 Credit Hours Course
- Essay (at least 2000 words): 40%
- Presentation (and accompanying PowerPoint deck) on December 8: 40%
- Class Participation (for in-person and remote synchronous students): 20%
- Oral exam (for remote asynchronous students): 20%
Where relevant, essays may include software code and content from internet portals or databases.
Students taking this course for 3 credit hours will be required to prepare an additional essay of 3000 words, together with a class presentation and PowerPoint deck. The total contribution of these two essays is 40%.
Attendance at the synchronous session on December 8, featuring student presentations, is compulsory for all students.
Introduction
Ontology (also called 'metaphysics') is a subfield of philosophy which aims to establish the kinds of entities in the world -- including both the material and the mental world -- and the relations between them. Applied ontology applies philosophical ideas and methods to support those who are collecting, using, comparing, refining, evaluating or (today above all) generating data.
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as 'intelligent'. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being. ChatGPT and other large language models (LLMs) attempt to generate data from other data, where the latter are obtained for example by crawling the internet.
Required reading
- Jobst Landgrebe and Barry Smith, Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Routledge 2022; revised and enlarged edition published in 2025).
Draft Schedule
Monday, August 25 (4:00-5:50pm) Ontology and the Origins of AI
Since its inception in the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now have real hands-on experience of what AI can do.
In this first lecture we will address the origins of AI at Stanford University in the 1970s and '80s, and specifically the work on common-sense ontology of Patrick Hayes and others.
Topics to be dealt with include:
- What is ontology?
- From Aristotle to 20th century philosophical ontology
- Patrick Hayes, Naive Physics and ontology-based robotics
- Doug Lenat and the CYC (for 'enCYClopedia') project
- Why CYC failed
- Why ontology is still important to AI
Background:
Monday, September 1 NO CLASS: LABOR DAY
Monday, September 8 (4:00-5:50pm) Natural and Artificial Intelligence
The classical psychological definitions of intelligence are:
- A. the ability to adapt to new situations (applies both to humans and to animals)
- B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience
Can a machine be intelligent in either of these senses?
What do IQ tests measure?
Readings:
- Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.
- Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia
- There's no 'I' in 'AI', Steven Pemberton, Amsterdam, December 12, 2024
- 1. Ersatz definitions: using words like 'thinks', as in 'the machine is thinking', but with meanings quite different from those we use when talking about human beings. As when we define 'flying' as moving through the air, and then jump up and down and say "look, I'm flying!"
- 2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli
- 3. If you can't spot irony, you're not intelligent
Monday, September 15 (4:00-5:50pm) Limits of AI?
1. Surveys the technical fundamentals of AI: Methods, mathematics, usage
2. Natural and engineered systems
3. The ontology of systems
4. Complex systems
5. The limits of Turing machines
6. Why AI cannot model complex systems adequately and synoptically, and why AI systems therefore cannot reach a level of intelligence equal to that of human beings.
Conclusions:
- AI is a family of algorithms to automate repetitive events
- Deep neural networks have nothing to do with neurons
- AI is not artificial 'intelligence'; it is a branch of mathematics in which the attempt is made to push the Turing machine to its limits by using gigantically large amounts of data
Monday, September 22 (4:00-5:50pm) Transhumanism, digital immortality and the Fermi paradox
1. Surveys the full spectrum of transhumanism and its cultural origins.
2. Debunks the feasibility of radically improving human beings via technology.
3. Explains why Sam Altman and other AI gods are so passionate about creating Artificial General Intelligence
Background:
AI and the meaning of life:
- AI and The Matrix
- There is no general AI
- Landgrebe on Transhumanism
- Considering the existential risk of Artificial Superintelligence
- Scott Adams: We are living in a simulation
Ontology of the Eruv (why it would take all the fun out of real estate if everyone could live next door to John Lennon)
Are we living in a simulation?
Background:
- David Chalmers' Reality+
- Scott Adams: We are living in a simulation
- AI and The Matrix
- Slides
- Are we living in a simulation?
- On Chalmers on Reality+?
- The Future of Artificial Intelligence
Monday, September 22 (4:00-5:50pm) Can a machine be conscious?
Machines cannot have intentionality; they cannot have experiences which are about something.
Background
- Slides
- Video
- Searle's Chinese Room Argument
- Searle: Minds, Brains, and Programs
- Making AI Meaningful Again
- Søgaard: Do Language Models Have Semantics?
Monday, September 29 (4:00-5:50pm) The machine will
Computers cannot have a will, because computers don't give a damn. Therefore there can be no machine ethics.
- The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023.
Implications of the absence of a machine will:
- The problem of the singularity (when machines will take over from humans) will not arise
- The idea of digital immortality will never be realized (Slides)
- There can be no AI ethics (only: ethics governing human beings when they use AI)
Monday, October 6 (4:00-5:50pm) Use of ontologies to support Large Language Models
- Explainability: Ontologies make AI decision-making processes more transparent and interpretable. By providing a clear, logical structure of knowledge, they allow for tracing the reasoning behind some AI decisions.
- Consistency: They help to foster logical consistency across AI systems, reducing errors and contradictions. This is particularly crucial in complex domains where maintaining coherence is challenging.
- Interoperability: Ontologies help to foster seamless integration of knowledge from various sources and domains. This interoperability is essential for creating comprehensive AI systems that can reason across multiple areas of expertise.
- Semantic Richness: Ontologies capture nuanced relationships and constraints that go beyond simple hierarchical structures, allowing for more sophisticated reasoning.
- Domain Expertise Encoding: They provide a means to formally encode human expert knowledge, to some extent bridging the gap between human understanding and machine processing.
Adapted from Riley (October 2024), with caveats added. A toy illustration of some of these points is sketched below.
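To make these points more concrete, here is a minimal, purely illustrative Python sketch (not taken from Riley or from the course materials): a toy ontology is encoded as a class hierarchy plus relations with domain and range constraints, and a candidate assertion extracted from LLM output is accepted only if it respects those constraints. All class, relation, and instance names in the sketch are invented.

```python
# A minimal sketch of ontology-based checking of LLM output.
# The ontology is a toy: a class hierarchy plus relations with declared
# domain and range. All names here are invented for illustration; real
# work would use OWL/RDF tooling rather than plain dictionaries.

# Class hierarchy: child -> parent ("is_a").
IS_A = {
    "employee": "person",
    "person": "agent",
    "university": "organization",
    "organization": "agent",
    "city": "site",
}

# Relations with domain and range constraints.
RELATIONS = {
    "works_for": ("person", "organization"),
    "located_in": ("organization", "site"),
}

# Instance data: which class each named individual belongs to.
INSTANCES = {
    "Mary": "employee",
    "UB": "university",
    "Buffalo": "city",
}


def is_subclass(child, ancestor):
    """Walk up the is_a hierarchy to see whether child falls under ancestor."""
    while child is not None:
        if child == ancestor:
            return True
        child = IS_A.get(child)
    return False


def check_assertion(subject, relation, obj):
    """Accept a (subject, relation, object) triple only if it respects the
    ontology's domain and range constraints."""
    if relation not in RELATIONS:
        return False
    domain, range_ = RELATIONS[relation]
    return (is_subclass(INSTANCES.get(subject), domain)
            and is_subclass(INSTANCES.get(obj), range_))


if __name__ == "__main__":
    # A plausible LLM-extracted assertion that the ontology licenses:
    print(check_assertion("Mary", "works_for", "UB"))        # True
    # A category mistake the ontology rejects (a city cannot work for anyone):
    print(check_assertion("Buffalo", "works_for", "Mary"))   # False
```

The point is only structural: because the check is grounded in an explicit, human-readable ontology, a rejected assertion can be traced back to the specific constraint it violates, which is what the explainability and consistency items in the list above gesture towards.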
Monday October 13 NO CLASS: FALL BREAK
Monday October 20 (4:00-5:50pm) An introduction to the statistical foundations of AI
The types of AI
- Deterministic AI
  - Good old fashioned AI (GOFAI)
- Basic stochastic AI
  - How regression works (see the sketch after this list)
- Advanced stochastic AI
  - Neural networks and deep learning
- Hybrid
  - Neurosymbolic AI
- Background reading: Why machines will never rule the world, chapter 8
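As a minimal illustration of 'how regression works' (the simplest form of stochastic AI), the following Python sketch fits a straight line to a small synthetic dataset by ordinary least squares. The data and parameter values are invented for illustration; the sketch is not drawn from the course readings.

```python
# A minimal sketch of "how regression works": ordinary least squares fitted
# to a small synthetic dataset. All numbers are invented for illustration.
import numpy as np

# Synthetic data: y depends (noisily) on x roughly as y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Design matrix with a column of ones for the intercept.
X = np.column_stack([np.ones_like(x), x])

# Closed-form least-squares solution: minimize ||X @ beta - y||^2.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta

print(f"estimated intercept: {intercept:.2f}, estimated slope: {slope:.2f}")

# Prediction for a new input is just a weighted sum; the "stochastic"
# element lies in how the weights were estimated from noisy data.
x_new = 12.0
print(f"prediction at x = {x_new}: {intercept + slope * x_new:.2f}")
```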
Monday October 27 (4:00-5:50pm) AI and World Models
- Affordances and the background of Artificial Intelligence
- Making AI Meaningful Again
Monday November 3 (4:00-5:50pm) Ontology-based AI Enhancement Strategies: Part 1
Featuring John Beverley
Combining the semantically rich architectures provided by ontologies and knowledge graphs with the generative strengths of LLMs promises to provide a path towards more explainable artificial intelligence systems, more trustworthy output, and a deeper understanding of vulnerabilities arising from integrated architectures. A toy sketch of one such combination appears after the list below.
Introduction
- Ontologies and Knowledge Graphs
- Strategies for leveraging ontologies and knowledge graphs to enhance the capabilities of Large Language Models and address their limitations.
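The following toy Python sketch illustrates one such strategy under simplifying assumptions: facts are retrieved from a small hand-built knowledge graph and injected into an LLM prompt, so that the model's answer can be grounded in, and audited against, curated knowledge. The graph, the question, and the prompt template are all invented, and no particular LLM API is assumed; the model call is left as a stub.

```python
# A toy sketch of knowledge-graph-grounded prompting. The triples, the
# question, and the prompt wording are invented for illustration only.

TRIPLES = [
    ("aspirin", "is_a", "nonsteroidal anti-inflammatory drug"),
    ("aspirin", "treats", "headache"),
    ("aspirin", "has_contraindication", "bleeding disorder"),
    ("ibuprofen", "treats", "headache"),
]


def retrieve(entity):
    """Return every triple in which the entity appears as subject or object."""
    return [t for t in TRIPLES if entity in (t[0], t[2])]


def build_grounded_prompt(question, entity):
    """Assemble a prompt that asks the model to answer only from the
    retrieved facts, which is what makes the answer auditable."""
    facts = "\n".join(f"- {s} {p.replace('_', ' ')} {o}" for s, p, o in retrieve(entity))
    return (
        "Answer the question using ONLY the facts below; "
        "say 'unknown' if they do not suffice.\n"
        f"Facts:\n{facts}\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    prompt = build_grounded_prompt("What does aspirin treat?", "aspirin")
    print(prompt)
    # response = some_llm(prompt)  # stub: plug in whichever model is used
```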
Monday November 10 (4:00-5:50pm) Ontology-based AI Enhancement Strategies: Part 2
Outlines the current landscape of ontology-based AI enhancement strategies, highlighting what works well, what does not, and why ontology engineering is necessary.
Background
Monday November 17 (4:00-5:50pm) AI and science: Why the replication problem is here to stay
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay
The 'replication problem' is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of 'open science'. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.
Background
- The replication problems which arise when AI is applied in scientific research
- Is Psychology Finished?
- Reproducibility of Scientific Results, Stanford Encyclopedia of Philosophy, 2018
- Science has been in a “replication crisis” for a decade
- Irreproducibility Crisis and the Lehman Crash, Barry Smith, YouTube 2020
Monday November 24 (4:00-5:50pm) TBD
Monday December 1 (4:00-5:50pm) Personal knowledge
- Knowing how vs Knowing that
- Personal knowledge and science
- Creativity
- Empathy
- Entrepreneurship
- Leadership and control (and ruling the world)
Background
Monday December 8 (4:00-5:50pm) Oral presentations (Compulsory for all students)
Background Material
An Introduction to AI for Philosophers
An Introduction to Philosophy for Computer Scientists
John McCarthy, "What has AI in common with philosophy?"
Companion volume to Why Machines Will Never Rule the World
Podcasts and interviews on Why Machines Will Never Rule the World