Ontology and Artificial Intelligence - Fall 2025

From NCOR Wiki

Department of Philosophy, University at Buffalo

Fall 2025 - PHI609SEM-SMI2 - Special Topics: Ontology and Artificial Intelligence - Class Number 24371

Faculty: Barry Smith

Hybrid

in person: Monday 4-5:50pm, 141 Park Hall
remote synchronous: Monday 4-5:50pm; dial-in details will be supplied by email
remote asynchronous: dial-in details will be supplied by email; must attend synchronously (either online or in person) on December 8

Grading: All enrolled students must submit to BS a Starting Draft version of their essay by November 10 at the latest. They must submit a full version of their essay and of the associated PowerPoint deck by December 8.

Word length requirements are as follows:

PhD candidates:
2 credit hours: 2000 words / starting draft: 1000 words
3 credit hours: 2000 + 3000 words / starting draft: 1000 + 1000 words
Masters candidates:
2 credit hours: 1500 words /starting draft: 750 words
3 credit hours: 1500 + 2000 words / starting draft: 750 + 750 words
Undergraduate candidates
2 credit hours: 1000 words / starting draft: 500 words
3 credit hours: 1500 words / starting draft: 500 + 500 words

3 credit hour candidates may submit a single essay provided its length conforms to the combined limits listed above.

The starting draft should be your own work: no use of LLMs. All candidates are, however, welcome to use ChatGPT to polish their starting drafts, provided that they follow the rules set forth here:

Grading for 2 Credit Hours Course (PhD candidates)

Essay (at least 2000 words): 40%
Presentation (and accompanying powerpoint deck) on December 8: 40%
Class Participation (for in person and remote synchronous students) 20%
Oral exam (for remote asynchronous students) 20%

Essays may include software code and internet portal or database content where relevant.

Students taking this course for 3 credit hours will be required to prepare an additional essay of 3000 words, together with class presentation and powerpoint deck. The total contribution for these two essays is 40%.

Policy on use of AI

There are two options:

Option 1: Include a declaration on p. 1 to the effect that the essay was written entirely without any sort of AI assistance. I reserve the right to use software tools, as well as my own judgment, to verify that the essay was written by you. Grades under Option 1 will be determined by the quality of your essay.

Option 2 is in three steps:

Step 1. Create a draft in your own words of an essay that is about half as long as your target length. This should be a substantive draft, but it can contain, for example, rough notes pointing to further lines of development. Not only this initial draft, but also all further steps in the list below, should rely on your study of the relevant literature. Both your draft and your final essay should accordingly contain lists of references.
Step 2. Submit this draft to me at phismith@buffalo.edu by the middle of the semester.
Step 3. Create a new prompt using your draft as an attachment, with an instruction such as: show me how I can improve the attached. This will start a potentially long process of improving your essay, incorporating further contributions from you together with assistance from the LLM. You should use prompts to steer the style of the LLM output toward a style appropriate to serious academic research, with references, quotations, and definitions as needed. Most importantly: you should be aware that LLMs often make errors (called 'hallucinations'), for example inventing references to literature which does not in fact exist.
Step 4. The LLM keeps track of everything you tell it to do from the moment you start the new chat. When you think you might be ready to submit, use the LLM's save function to generate a URI linking to all the interactions thus far – effectively a log of your process. This log, together with your initial and final essays, will form part of what is evaluated for your grade.
Step 5. When you truly are ready to submit, press save one last time and take note of the link; send me this link, together with your completed essay and any notes on features of the log you wish to point out -- for example, requests that I ignore specific chains of prompts because they proved to be dead ends.

Grades under Option 2 will be determined on the basis of (a) the originality of the initial draft, (b) the creativity of your prompts, and (c) the quality of the final essay.

Attendance at the synchronous session on December 8, featuring student presentations, is compulsory for all students.

Introduction

Ontology (also called 'metaphysics') is a subfield of philosophy which aims to establish the kinds of entities in the world -- including both the material and the mental world -- and the relations between them. Applied ontology applies philosophical ideas and methods to support those who are collecting, using, comparing, refining, evaluating or (today above all) generating data.

Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as 'intelligent'. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being. ChatGPT and other large language models (LLMs) attempt to generate data from other data, where the latter are obtained for example by crawling the internet.
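The idea of generating data from other data can be made concrete with a toy sketch. The Python fragment below (the corpus and function names are invented for illustration) builds a bigram table from a nine-word corpus and samples a continuation word by word -- a vastly simplified stand-in for what LLM training and inference do at scale:

```python
import random

corpus = "the cat sat on the mat the cat ate".split()

# Count which words follow which: a crude analogue of LLM training
bigrams = {}
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams.setdefault(w1, []).append(w2)

def generate(start, n, rng):
    """Sample a continuation one word at a time -- stochastic, not understanding."""
    out = [start]
    for _ in range(n):
        successors = bigrams.get(out[-1])
        if not successors:      # no known continuation: stop
            break
        out.append(rng.choice(successors))
    return " ".join(out)

rng = random.Random(0)
text = generate("the", 5, rng)
```

Every word the sketch emits was seen in the training data, yet the resulting sequence need never have occurred there -- which is also why nothing guarantees that the output is true.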

Required reading

Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Routledge 2022; revised and enlarged 2nd edition published in 2025).

See also offer here

Draft Schedule

Monday, August 25 (4:00-5:50pm) The Glory and the Misery of Large Language Models

We will provide a brief introduction to Large Language Models such as ChatGPT, focusing not only on positive but also on negative aspects of how they work.

Video1
Slides1
Transcript

GPT-5 and the French and Indian War:

Video2
Slides2
Summary of the argument of Why Machines Will Never Rule the World
What does 'stochastic' mean in 'stochastic AI'?
What is 'scaling'?
What are hallucinations?
Teach yourself history with ChatGPT

Monday, September 1 NO CLASS: LABOR DAY

Monday, September 8 (4:00-5:50pm) Ontology and the History of AI

Part 1: From Good Old Fashioned (Logical, Symbolic) AI to ChatGPT

Since its inception in the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.

In this first lecture we will address the origins of AI at Stanford University in the 1970s and '80s, and specifically in the work on common-sense ontology of Patrick Hayes and others.

Topics to be dealt with include:

What is ontology?
From Aristotle to 20th century philosophical ontology
Patrick Hayes, Naive Physics and ontology-based robotics
Doug Lenat and the CYC (for 'enCYClopedia') project
Why CYC failed
Why ontology is still important to AI

Background:

History of AI
Where do ontologies come from?
See also references to Hayes in Everything must go

Monday, September 15 (4:00-5:50pm) Limits of AI?

Slides
Video

1. Surveys the technical fundamentals of AI: Methods, mathematics, usage

2. Natural and engineered systems

3. The ontology of systems

4. Complex systems

5. The limits of Turing machines

6. Why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.

Conclusions:

AI is a family of algorithms to automate repetitive events
Deep neural networks have nothing to do with neurons
AI is not artificial 'intelligence'; it is a branch of mathematics in which the attempt is made to push the Turing machine to its limits by using gigantically large amounts of data

Background reading:

Marcus on superintelligence
https://www.wheresyoured.at/
https://x.com/jobstlandgrebe?lang=en
https://ontology.buffalo.edu/smith/

Monday, September 22 (4:30 - 16:15) Machine Consciousness, Transhumanism, and Ecological Psychology

1. Jobst Landgrebe on mathematical definitions of consciousness

2. Surveys the spectrum of transhumanism

3. Debunks the feasibility of radically improving human beings via technology.

4. Explains why Sam Altman and other AI gods are so passionate about creating Artificial General Intelligence

5. J. J. Gibson, direct realism, and how our behavior is tuned to affordances

Video
Slides

Background:

TESCREALISM
Transhumanism and the Mind-Body Problem

AI and the meaning of life:

AI and The Matrix
There is no general AI
Landgrebe on Transhumanism
Considering the existential risk of Artificial Superintelligence
Scott Adams: We are living in a simulation

Ontology of the Eruv (why it would take all the fun out of real estate if everyone could live next door to John Lennon)

Are we living in a simulation?

David Chalmers' Reality+
Scott Adams: We are living in a simulation
AI and The Matrix
Slides
Are we living in a simulation?
On Chalmers on Reality+?
The Future of Artificial Intelligence

Machine consciousness: Machines cannot have intentionality; they cannot have experiences which are about something.

Background

Slides
Video
Searle's Chinese Room Argument
Searle: Minds, Brains, and Programs
Making AI Meaningful Again
Søgaard: Do Language Models Have Semantics?

Monday, September 29 (4:00-5:50pm) AGI, Behavior Settings and Distributed Cognition

Video
Slides

Part 1. Question-and-answer session with Jérémy Ravenel of naas.ai

Questions to be addressed include:

What are you doing with BFO and LLMs?
Can you rely on BFO still being operative in the proper way even after a new release of an LLM?

See also: Why is BFO so powerful?

Part 2. Niches and Intelligence

Knowing how vs Knowing that
Personal knowledge and science
Creativity
Empathy
Entrepreneurship
Leadership and control (and ruling the world)

Background

Explicit, implicit, practical, personal and tacit knowledge
Personal knowledge

Monday, October 6 (4:00-5:50pm) Towards a theory of intelligence

Slides
Video

Part 1. Definitions of intelligence

A. the ability to adapt to new situations (applies both to humans and to animals) 
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience 

Can a machine be intelligent in either of these senses?

Can a team be intelligent?

See Ryan Muldoon, "Diversity and the Division of Cognitive Labor", Philosophy Compass 8 (2):117-125 (2013)

Can a team made of humans and AI systems be intelligent?

See M. Stelmaszak et al., "Artificial Intelligence as an Organizing Capability Arising from Human-Algorithm Relations", Journal of Management Studies, https://doi.org/10.1111/joms.70003

Part 2. What do IQ tests measure?

Slides on IQ tests
Human and animal intelligence

Readings:

Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence

The context-dependence of human intelligence, and why AGI is impossible

Part 3. Affordances, tacit knowledge, cognitive niches, and the background of Artificial Intelligence

Background:

Harry Heft, Ecological Psychology in Context
There's no 'I' in 'AI', Steven Pemberton, Amsterdam, December 12, 2024
1. Ersatz definitions: using words like 'thinks', as in 'the machine is thinking', but with meanings quite different from those we use when talking about human beings. As when we define 'flying' as moving through the air, and then jump up and down and say "look, I'm flying!"
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli
3. If you can't spot irony, you're not intelligent

Monday October 13 NO CLASS: FALL BREAK

Monday October 20 (4:00-5:50pm) The Free Will Problem and the Problem of the Machine Will

Video
Slides

Computers cannot have a will, because computers don't give a damn. Therefore there can be no machine ethics.

The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.

Implications of the absence of a machine will:

The problem of the singularity (when machines will take over from humans) will not arise
The idea of digital immortality will never be realized (Slides)
There can be no AI ethics (only: ethics governing human beings when they use AI)

What is the basis of ethics as applied to humans?

Raymond Tallis: Freedom: An Impossible Reality
Slides
Video
Utilitarianism
Value ethics
Responsibility

No responsibility without objectifying intelligence

On what basis should we build an AI ethics?

On why AI ethics is (a) impossible, (b) unnecessary

Readings:

Moor: Four kinds of ethical robots
Jobst Landgrebe and Barry Smith: No AI Ethics
Crane: The AI Ethics Hoax

Monday October 27 (4:00-5:50pm) The Ontology of Consciousness

Slides
Video

Learning outcomes
John Searle
On consciousness: the Chinese Room Argument
Searle and Smith
Neuroscience and consciousness
Anil Seth, Being You: A New Science of Consciousness
Making AI meaningful again
Raymond Tallis, Why the Mind is not a Computer
Raymond Tallis: The Explicit Animal: A Defence of Human Consciousness

Monday November 3 (4:00-5:50pm) Debates on ontology engineering: Part 1

Featuring John Beverley

Video
Slides
Transcription

Debating the following motions:

Philosophy is irrelevant to ontology engineering
The use-mention confusion
Mappings merely give extra life to bad ontologies
AI fear is justified
BFO is too slow to react
Knowledge graphs cannot prevent hallucinations
There can never be AGI

Background

Strategies for leveraging ontologies and knowledge graphs to enhance the capabilities of Large Language Models and address their limitations.

The Ontological Foundation: A Cornerstone for Trustworthy AI with caveats added in bold face

  • Explainability: Ontologies make AI decision-making processes more transparent and interpretable. By providing a clear, logical structure of knowledge, they allow for tracing the reasoning behind some AI decisions.
  • Consistency: They help to foster logical consistency across AI systems, reducing errors and contradictions. This is particularly crucial in complex domains where maintaining coherence is challenging.
  • Interoperability: Ontologies help to foster seamless integration of knowledge from various sources and domains. This interoperability is essential for creating comprehensive AI systems that can reason across multiple areas of expertise.
  • Semantic Richness: Ontologies capture nuanced relationships and constraints that go beyond simple hierarchical structures, allowing for more sophisticated reasoning.
  • Domain Expertise Encoding: They provide a means to formally encode human expert knowledge, to some extent bridging the gap between human understanding and machine processing.
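As a minimal sketch of the Consistency point above: knowledge represented as explicit triples can be checked against formal constraints in a way that purely statistical output cannot. The triples, predicate names, and domain rule below are invented for illustration only:

```python
# A toy knowledge graph as subject-predicate-object triples (names invented)
triples = {
    ("aspirin", "is_a", "drug"),
    ("aspirin", "treats", "headache"),
    ("headache", "is_a", "disorder"),
    ("headache", "treats", "aspirin"),   # a deliberately bad triple
}

def domain_violations(triples, predicate, required_type):
    """Return triples whose subject lacks the type the predicate requires."""
    typed = {s for (s, p, o) in triples if p == "is_a" and o == required_type}
    return sorted((s, p, o) for (s, p, o) in triples
                  if p == predicate and s not in typed)

# Constraint: the subject of 'treats' must be a drug; the bad triple is flagged
bad = domain_violations(triples, "treats", "drug")
```

Real ontology languages such as OWL express such domain and range constraints declaratively and hand the checking to a reasoner; the point of the sketch is only that a contradiction or category error becomes mechanically detectable once knowledge is made explicit.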


An introduction to the statistical foundations of AI

Video

The types of AI

Deterministic AI
Good old fashioned AI (GOFAI)
Basic stochastic AI
How regression works
Advanced stochastic AI
Neural networks and deep learning
Hybrid
Neurosymbolic AI
Background reading: Why machines will never rule the world, 1e chapter 8, 2e chapter 9
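The "how regression works" entry in the list above can be illustrated with the simplest case, ordinary least squares on one variable. The sketch below (data and coefficients are synthetic, chosen for the example) recovers the slope and intercept of a noisy line -- basic stochastic AI in miniature:

```python
import random

# Synthetic data: y = 2x + 1 plus Gaussian noise
random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]

# Ordinary least squares: slope = cov(x, y) / var(x)
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x   # intercept passes through the means
```

The fitted `a` and `b` land close to the true 2.0 and 1.0. Neural networks generalize this pattern: many more parameters, nonlinear functions, fitted by iterative optimization rather than a closed formula, but still curve-fitting on data.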

Monday November 10 (4:00-5:50pm) Debates on ontology engineering: Part 2

Video
Slides
Transcription
Will combining the semantically rich architectures provided by ontologies and knowledge graphs with the generative strengths of LLMs provide a path towards more explainable artificial intelligence systems, more trustworthy output, and a deeper understanding of vulnerabilities arising from integrated architectures?
The idea of digital immortality is idiotic
We should allow AI research to proceed unregulated
Even if you think AGI is impossible, you should treat robots at certain levels of sophistication as moral agents
'OWL semantics' have nothing to do with the semantics of ordinary language
AI will take away our jobs
There will never be driverless cars
Science is not ready for software, let alone AI

Outlines the current landscape of ontology-based AI enhancement strategies, highlighting what goes well and what goes poorly, and why ontology engineering is necessary.

Background

Ontological Assumptions in AI Outputs

November 10 is the deadline for submission to BS of starting drafts for your essays

PhD candidates:

2 credit hours: 2000 words / starting draft: 1000 words
3 credit hours: 2000 + 3000 words / 1000 + 1000 words

Masters candidates:

2 credit hours: 1500 words /starting draft: 750 words
3 credit hours: 1500 + 2000 words / 750 + 750 words

Undergraduate candidates

2 credit hours: 1000 words / starting draft: 500 words
3 credit hours: 1500 words / 500 + 500 words

Wednesday, November 19 (10:00 - 11:50am) On Hallucinations and Political Correctness

This will be a lecture by Jobst Landgrebe on:

Why machines will never stop hallucinating

In current-day culture, concerns are raised when LLMs respond with symbol or pixel sequences that are seen as deviating from social norms of political correctness or wokeness -- or in other words, when they say the unsayable. Further problems are raised for LLM technology by the inconvenient fact of hallucinations, since these prevent its use for task automation. LLM architects and engineers try to prevent both types of events. This talk shows why it is impossible to ensure that LLMs do not hallucinate or speak the unspeakable, drawing on arguments from the theory of computation (Turing's decision problem, Rice's theorem, Gödel's First Incompleteness Theorem).
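The diagonal construction behind the computability results the lecture invokes (Turing's decision problem, Rice's theorem) can be sketched in a few lines of Python. The `pessimist` "decider" below is a hypothetical stand-in for any claimed algorithm that decides a semantic property of programs; the diagonal program simply does the opposite of whatever the decider predicts:

```python
def diagonalize(would_halt):
    """From any claimed halting decider, build a program it misjudges."""
    def trouble():
        if would_halt(trouble):
            while True:          # decider predicted "halts": loop forever
                pass
        return "halted"          # decider predicted "loops": halt at once
    return trouble

def pessimist(program):
    """Hypothetical decider that claims no program ever halts."""
    return False

t = diagonalize(pessimist)
prediction = pessimist(t)   # False: "t will never halt"
outcome = t()               # ...yet t halts immediately
```

Whatever decider is plugged in, its own diagonal program refutes it; this is the shape of the argument for why no filter can decide, for all inputs, whether an LLM's output will be a hallucination.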

Literature:

Glukhov et al. 2023, LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?

Banerjee et al. 2024, LLMs Will Always Hallucinate, and We Need to Live With This

Monday November 24 (4:00-5:50pm) Why the Replication Problem is here to stay

Jobst Landgrebe on the Replication Crisis in AI

The 'replication problem' is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of 'open science'. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.

Slides
The replication problems which arise when AI is applied in scientific research
Is Psychology Finished?
Reproducibility of Scientific Results, Stanford Encyclopedia of Philosophy, 2018
Science has been in a “replication crisis” for a decade
Irreproducibility Crisis and the Lehman Crash, Barry Smith, Youtube 2020
Bayer tested some findings and only achieved a 21% replication rate for biomedical studies


The Ontological Foundation: A Cornerstone for Trustworthy AI, October 2024, with caveats added in bold face

Monday December 1 (4:00-5:50pm) TBD

Jobst Landgrebe: Why we cannot create intelligence inside a machine

Monday December 8 (4:00-5:50pm) Oral presentations (Compulsory for all students)

Background Material

An Introduction to AI for Philosophers

Why not robot cops? Video
Why not robot cops? Slides

An Introduction to Philosophy for Computer Scientists

Video
Slides

John McCarthy, "What has AI in common with philosophy?"

Companion volume to Why Machines Will Never Rule the World

Podcasts and interviews on Why Machines Will Never Rule the World

Student Learning Outcomes

1. Comprehend the Architecture and Operation of Large Language Models: Explain the basic design and functioning of Large Language Models (LLMs) such as ChatGPT. Define and correctly use key terms.

2. Evaluate the Theoretical and Practical Limits of AI: Explain the limitations of AI systems as applications of Turing-computable mathematics. Critically assess claims about Artificial General Intelligence (AGI) and the “singularity.”

3. Examine Theories of Machine Consciousness, Transhumanism, and Simulation: Explain why machines lack intentionality and subjective experience.

4. Understand Ethical and Normative Dimensions of AI: Explain why AI systems cannot possess will, intention, or moral responsibility, and differentiate between AI ethics and ethics of AI use.

5. Apply Ontology-Based Strategies for AI Enhancement: Explain how ontologies and knowledge graphs can improve the explainability, consistency, and interoperability of AI systems. Identify strengths and weaknesses of ontology-based and neurosymbolic AI approaches.