Ontology and Artificial Intelligence - Fall 2025

Latest revision as of 13:21, 2 December 2025

Department of Philosophy, University at Buffalo
Ontology and AI

Fall 2025 - PHI609SEM-SMI2 - Special Topics: Ontology and Artificial Intelligence - Class Number 24371

Faculty: Barry Smith

Hybrid

in person: Monday 4-5:50pm, 141 Park Hall
remote synchronous: Monday 4-5:50pm; dial-in details will be supplied by email
remote asynchronous: dial-in details will be supplied by email; must attend synchronously (either online or in person) on December 8

Grading

All enrolled students must submit to BS a Starting Draft version of their essay by November 10 at the latest. They must submit a full version of their essay and of the associated powerpoint deck by December 8.

Word length requirements are as follows:

PhD candidates:
2 credit hours: 2000 words / starting draft: 1000 words
3 credit hours: 2000 + 3000 words / starting draft: 1000 + 1000 words
Masters candidates:
2 credit hours: 1500 words / starting draft: 750 words
3 credit hours: 1500 + 2000 words / starting draft: 750 + 750 words
Undergraduate candidates:
2 credit hours: 1000 words / starting draft: 500 words
3 credit hours: 1500 words / starting draft: 500 + 500 words

Candidates taking the course for 3 credit hours may submit a single essay, provided its length conforms to the combined limits listed above.

The starting draft should be your own work: no use of LLMs. All candidates are, however, welcome to use ChatGPT to polish their starting drafts, provided that they follow the rules set forth under 'Policy on use of AI' below.

Grading for 2 Credit Hours Course (PhD candidates)

Essay (at least 2000 words): 40%
Presentation (and accompanying powerpoint deck) on December 8: 40%
Class Participation (for in person and remote synchronous students) 20%
Oral exam (for remote asynchronous students) 20%

Essays may include software code and internet portal or database content where relevant.

Students taking this course for 3 credit hours will be required to prepare an additional essay of 3000 words, together with class presentation and powerpoint deck. The total contribution for these two essays is 40%.

Policy on use of AI

There are two options:

Option 1: Include a declaration on p. 1 to the effect that the essay was written entirely without any sort of AI assistance. I reserve the right to use software tools, but also my own judgment, to ensure this draft was written by you. Grades under Option 1 will be determined by the quality of your essay.

Option 2 is in five steps:

Step 1. Create a draft in your own words of an essay that is about half as long as your target length. This should be a substantive draft, but it can contain for example rough notes pointing to further lines of development. Not only this initial draft, but also all further steps in the list below, should rely on study by you of the relevant literature. Both your draft and your final essay should accordingly contain lists of references.
Step 2. Submit this draft to me at phismith@buffalo.edu by the middle of the semester.
Step 3. You create a new prompt using your draft as an attachment, with an instruction such as: show me how I can improve the attached. This will start a potentially long process of improvements to your essay, incorporating further contributions from you together with assistance from the LLM. You should attempt to use prompts to steer the style of the LLM output in the direction of a style appropriate to serious academic research, with references, quotations, and definitions as needed. Most importantly: you should be aware that LLMs often make errors (called 'hallucinations'), for example inventing references in the literature which do not in fact exist.
Step 4. The LLM has been keeping track of everything you tell it to do since you started the new chat. When you think you might be ready to submit, use the LLM save function to generate a URI linking to all the interactions thus far – effectively a log of your process. This log, together with your initial and final essay, will form part of what will be evaluated for your grade.
Step 5. When you truly are ready to submit, press save one last time and take note of the link; send me this link, together with your completed essay, and with any notes on features of the log you wish to point out -- for example, requests that I ignore specific chains of prompts because they proved to be dead ends.

Grades under Option 2 will be determined on the basis of (a) the originality of the initial draft, (b) the creativity of your prompts, and (c) the quality of the final essay.

Attendance at the synchronous session on December 8, featuring student presentations, is compulsory for all students.

Introduction

Ontology (also called 'metaphysics') is a subfield of philosophy which aims to establish the kinds of entities in the world -- including both the material and the mental world -- and the relations between them. Applied ontology applies philosophical ideas and methods to support those who are collecting, using, comparing, refining, evaluating or (today above all) generating data.

Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as 'intelligent'. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being. ChatGPT and other large language models (LLMs) attempt to generate data from other data, where the latter are obtained for example by crawling the internet.
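
The phrase 'generate data from other data' can be illustrated with a toy word-level bigram model, a distant ancestor of an LLM's next-token prediction. The following is a minimal sketch in Python; the toy corpus and sampling scheme are invented for illustration, not taken from the course:

  import random
  from collections import defaultdict

  text = "the cat sat on the mat the cat ran"  # toy corpus
  model = defaultdict(list)
  for prev, nxt in zip(text.split(), text.split()[1:]):
      model[prev].append(nxt)  # record which words follow which

  random.seed(0)
  word, out = "the", ["the"]
  for _ in range(8):
      word = random.choice(model[word]) if model[word] else "the"
      out.append(word)
  print(" ".join(out))  # statistically plausible, but meaning-free, continuation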

Required reading

Why Machines Will Never Rule the World: Artificial Intelligence without Fear (Routledge 2022; revised and enlarged 2nd edition published in 2025).

See also offer here

Draft Schedule

Monday, August 25 (4:00-5:50pm) The Glory and the Misery of Large Language Models

We will provide a brief introduction to Large Language Models such as ChatGPT, focusing not only on positive but also on negative aspects of how they work.

Video1
Slides1
Transcript

GPT-5 and the French and Indian War:

Video2
Slides2
Summary of the argument of Why Machines Will Never Rule the World
What does 'stochastic' mean in 'stochastic AI'
What is 'scaling'
What are hallucinations?
Teach yourself history with ChatGPT

Monday, September 1 NO CLASS: LABOR DAY

Monday, September 8 (4:00-5:50pm) Ontology and the History of AI

Part 1: From Good Old Fashioned (Logical, Symbolic) AI to ChatGPT

Since its inception in the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now have real hands-on experience of what AI can do.

In this first lecture we will address the origins of AI at Stanford University in the 1970s and '80s, and specifically in the work on common-sense ontology of Patrick Hayes and others.

Topics to be dealt with include:

What is ontology?
From Aristotle to 20th century philosophical ontology
Patrick Hayes, Naive Physics and ontology-based robotics
Doug Lenat and the CYC (for 'enCYClopedia') project
Why CYC failed
Why ontology is still important to AI

Background:

History of AI
Where do ontologies come from?
See also references to Hayes in Everything must go

Monday, September 15 (4:00-5:50pm) Limits of AI?

Slides
Video

1. Surveys the technical fundamentals of AI: Methods, mathematics, usage

2. Natural and engineered systems

3. The ontology of systems

4. Complex systems

5. The limits of Turing machines

6. Why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.

Conclusions:

AI is a family of algorithms to automate repetitive events
Deep neural networks have nothing to do with neurons
AI is not artificial 'intelligence'; it is a branch of mathematics in which the attempt is made to push the Turing machine to its limits by using gigantically large amounts of data (a small Turing machine simulator is sketched below)
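
The notion of a Turing machine invoked in points 5 and 6 can be made concrete with a small simulator. This is a minimal illustrative sketch in Python (the rule format and the example machine are my own, not the course's): a machine is a table mapping (state, symbol) pairs to (written symbol, head move, next state), and the step budget hints at why halting cannot in general be predicted.

  # Minimal Turing machine simulator: rules map (state, symbol) -> (write, move, next_state)
  def run_tm(rules, tape, state="q0", halt="qH", max_steps=10_000):
      cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as "_"
      head = 0
      for _ in range(max_steps):
          if state == halt:
              return "".join(cells.get(i, "_") for i in range(min(cells), max(cells) + 1))
          write, move, state = rules[(state, cells.get(head, "_"))]
          cells[head] = write
          head += 1 if move == "R" else -1
      raise RuntimeError("step budget exhausted; in general, halting is undecidable")

  # Example machine: flip every bit, halt on the first blank
  flip = {("q0", "0"): ("1", "R", "q0"),
          ("q0", "1"): ("0", "R", "q0"),
          ("q0", "_"): ("_", "R", "qH")}
  print(run_tm(flip, "1011"))  # prints 0100_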

Background reading:

Marcus on superintelligence
https://www.wheresyoured.at/
https://x.com/jobstlandgrebe?lang=en
https://ontology.buffalo.edu/smith/

Monday, September 22 (4:30 - 16:15) Machine Consciousness, Transhumanism, and Ecological Psychology

1. Jobst Landgrebe on mathematical definitions of consciousness

2. Surveys the spectrum of transhumanism

3. Debunks the feasibility of radically improving human beings via technology.

4. Explains why Sam Altman and other AI gods are so passionate about creating Artificial General Intelligence

5. J. J. Gibson, direct realism, and how our behavior is tuned to affordances

Video
Slides

Background:

TESCREALISM
Transhumanism and the Mind-Body Problem

AI and the meaning of life:

AI and The Matrix
There is no general AI
Landgrebe on Transhumanism
Considering the existential risk of Artificial Superintelligence
Scott Adams: We are living in a simulation

Ontology of the Eruv (why it would take all the fun out of real estate if everyone could live next door to John Lennon)

Are we living in a simulation?

David Chalmers' Reality+
Scott Adams: We are living in a simulation
AI and The Matrix
Slides
Are we living in a simulation?
On Chalmers on Reality+?
The Future of Artificial Intelligence

Machine consciousness: Machines cannot have intentionality; they cannot have experiences which are about something.

Background

Slides
Video
Searle's Chinese Room Argument
Searle: Minds, Brains, and Programs
Making AI Meaningful Again
Søgaard: Do Language Models Have Semantics?
Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints

Monday, September 29 (4:00-5:50pm) AGI, Behavior Settings and Distributed Cognition

Video
Slides

Part 1. Question-and-answer session with Jérémy Ravenel of naas.ai

Questions to be addressed include:

What are you doing with BFO and LLMs?
Can you rely on BFO still being operative in the proper way even after a new release of an LLM?

See also: Why is BFO so powerful?

Part 2. Niches and Intelligence

Knowing how vs Knowing that
Personal knowledge and science
Creativity
Empathy
Entrepreneurship
Leadership and control (and ruling the world)

Background

Explicit, implicit, practical, personal and tacit knowledge
Personal knowledge

Monday, October 6 (4:00-5:50pm) Towards a theory of intelligence

Slides
Video

Part 1. Definitions of intelligence

A. the ability to adapt to new situations (applies both to humans and to animals) 
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience 

Can a machine be intelligent in either of these senses?

Can a team be intelligent?

See Ryan Muldoon, "Diversity and the Division of Cognitive Labor", Philosophy Compass 8 (2):117-125 (2013)

Can a team made of humans and AI systems be intelligent?

See M. Stelmaszak et al., "Artificial Intelligence as an Organizing Capability Arising from Human-Algorithm Relations", Journal of Management Studies, https://doi.org/10.1111/joms.70003

Part 2. What do IQ tests measure?

Slides on IQ tests
Human and animal intelligence

Readings:

Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence

The context-dependence of human intelligence, and why AGI is impossible

Part 3. Affordances, tacit knowledge, cognitive niches, and the background of Artificial Intelligence

Background:

Harry Heft, Ecological Psychology in Context
There's no 'I' in 'AI', Steven Pemberton, Amsterdam, December 12, 2024
1. Ersatz definitions: using words like 'thinks' as in 'the machine is thinking', but with meanings quite different from those we use when talking about human beings. As when we define 'flying' as moving through the air, and then jumping up and down and saying "look, I'm flying!"
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli
3. If you can't spot irony, you're not intelligent

Monday October 13 NO CLASS: FALL BREAK

Monday October 20 (4:00-5:50pm) The Free Will Problem and the Problem of the Machine Will

Video
Slides

Computers cannot have a will, because computers don't give a damn. Therefore there can be no machine ethics

The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather' says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.

Implications of the absence of a machine will:

The problem of the singularity (when machines will take over from humans) will not arise
The idea of digital immortality will never be realized (Slides)
There can be no AI ethics (only: ethics governing human beings when they use AI)

What is the basis of ethics as applied to humans?

Raymond Tallis: Freedom: An Impossible Reality
Slides
Video
Utilitarianism
Value ethics
Responsibility

No responsibility without objectifying intelligence

On what basis should we build an AI ethics?

On why AI ethics is (a) impossible, (b) unnecessary

Readings:

Moor: Four kinds of ethical robots
Jobst Landgrebe and Barry Smith: No AI Ethics
Crane: The AI Ethics Hoax

Monday October 27 (4:00-5:50pm) The Ontology of Consciousness

Slides
Video

Learning outcomes
John Searle
On consciousness: the Chinese Room Argument
Searle and Smith
Neuroscience and consciousness
Anil Seth, Being You: A New Science of Consciousness
Making AI meaningful again
Raymond Tallis, Why the Mind is not a Computer
Raymond Tallis: The Explicit Animal: A Defence of Human Consciousness

Monday November 3 (4:00-5:50pm) Debates on ontology engineering: Part 1

Featuring John Beverley

Video
Slides
Transcription

Debating the following motions:

Philosophy is irrelevant to ontology engineering
The use-mention confusion
Mappings merely give extra life to bad ontologies
AI fear is justified
BFO is too slow to react
Knowledge graphs cannot prevent hallucinations
There can never be AGI

Background

Strategies for leveraging ontologies and knowledge graphs to enhance the capabilities of Large Language Models and address their limitations; a minimal code sketch follows the list below.

The Ontological Foundation: A Cornerstone for Trustworthy AI, October 2024, with caveats added in bold face

  • Explainability: Ontologies make AI decision-making processes more transparent and interpretable. By providing a clear, logical structure of knowledge, they allow for tracing the reasoning behind some AI decisions.
  • Consistency: They help to foster logical consistency across AI systems, reducing errors and contradictions. This is particularly crucial in complex domains where maintaining coherence is challenging.
  • Interoperability: Ontologies help to foster seamless integration of knowledge from various sources and domains. This interoperability is essential for creating comprehensive AI systems that can reason across multiple areas of expertise.
  • Semantic Richness: Ontologies capture nuanced relationships and constraints that go beyond simple hierarchical structures, allowing for more sophisticated reasoning.
  • Domain Expertise Encoding: They provide a means to formally encode human expert knowledge, to some extent bridging the gap between human understanding and machine processing.
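
One concrete retrieval-augmentation pattern behind such strategies: query a curated knowledge graph and hand the LLM only the retrieved triples as grounding context. The following minimal sketch assumes Python with the rdflib library; the example.org namespace, the toy triples, and the prompt format are illustrative inventions, not a method endorsed in the course.

  from rdflib import Graph, Namespace, RDF, RDFS

  EX = Namespace("http://example.org/")  # hypothetical namespace
  g = Graph()
  g.add((EX.Aspirin, RDF.type, EX.Drug))
  g.add((EX.Aspirin, EX.treats, EX.Headache))
  g.add((EX.Drug, RDFS.subClassOf, EX.MaterialEntity))  # BFO-style upper class

  # Retrieve the curated facts and build a grounded prompt for an LLM
  rows = g.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o }")
  facts = "\n".join(f"{s.n3()} {p.n3()} {o.n3()}" for s, p, o in rows)
  prompt = ("Answer using ONLY the facts below; say 'unknown' otherwise.\n"
            + facts + "\nQuestion: What does Aspirin treat?")

The point of the pattern is that the graph, not the model's parametric memory, is the source of record, which is what makes some of the resulting decisions traceable.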


An introduction to the statistical foundations of AI

Video

The types of AI

Deterministic AI
Good old fashioned AI (GOFAI)
Basic stochastic AI
How regression works (a code sketch follows this list)
Advanced stochastic AI
Neural networks and deep learning
Hybrid
Neurosymbolic AI
Background reading: Why Machines Will Never Rule the World, 1st edition chapter 8, 2nd edition chapter 9
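
Since 'How regression works' appears above under basic stochastic AI, here is a minimal sketch of the simplest case: fitting a line by least squares. It assumes Python with numpy; the toy data are invented for illustration.

  import numpy as np

  # Toy data: y is roughly 2*x + 1 plus noise
  rng = np.random.default_rng(0)
  x = rng.uniform(0, 10, 50)
  y = 2 * x + 1 + rng.normal(0, 0.5, 50)

  # Closed-form least squares: choose w to minimize ||X w - y||^2
  X = np.column_stack([x, np.ones_like(x)])  # feature column plus intercept column
  w, *_ = np.linalg.lstsq(X, y, rcond=None)  # w approximates [slope, intercept]
  print(f"slope={w[0]:.2f}, intercept={w[1]:.2f}")

Deep neural networks generalize this step: many such fits composed through nonlinearities and trained by gradient descent rather than in closed form, but still function approximation from data.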

Monday November 10 (4:00-5:50pm) Debates on ontology engineering: Part 2

Video
Slides
Transcription
Will combining the semantically rich architectures provided by ontologies and knowledge graphs with the generative strengths of LLMs provide a path towards more explainable artificial intelligence systems, more trustworthy output, and a deeper understanding of vulnerabilities arising from integrated architectures?
The idea of digital immortality is idiotic
We should allow AI research to proceed unregulated
Even if you think AGI is impossible, you should treat robots at certain levels of sophistication as moral agents
'OWL semantics' have nothing to do with the semantics of ordinary language
AI will take away our jobs
There will never be driverless cars
Science is not ready for software, let alone AI

Outlines the current landscape of ontology-based AI enhancement strategies, highlighting what goes well and what goes poorly, and why ontology engineering is necessary.

Background

Ontological Assumptions in AI Outputs

November 10 is the deadline for submission to BS of starting drafts for your essays

PhD candidates:

2 credit hours: 2000 words / starting draft: 1000 words
3 credit hours: 2000 + 3000 words / starting draft: 1000 + 1000 words

Masters candidates:

2 credit hours: 1500 words / starting draft: 750 words
3 credit hours: 1500 + 2000 words / starting draft: 750 + 750 words

Undergraduate candidates:

2 credit hours: 1000 words / starting draft: 500 words
3 credit hours: 1500 words / starting draft: 500 + 500 words

Wednesday, November 19 (10:00 - 11:50am) On Hallucinations and Political Correctness

This will be a lecture by Jobst Landgrebe on:

Why machines will never stop hallucinating

Slides

Video

In current-day culture, concerns are raised when LLMs respond with symbol or pixel sequences which are seen as deviating from social norms of political correctness or wokeness -- or in other words, when they say the unsayable. Further problems are raised for LLM technology by the inconvenient fact of hallucinations, since these prevent their usage for task automation. LLM architects and engineers try to prevent both types of events. This talk shows why it is impossible to ensure that LLMs do not hallucinate or speak the unspeakable, drawing on arguments from the theory of computation (Turing's decision problem and Rice's theorem, Gödel's First Incompleteness Theorem).
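
The shape of the computability argument can be given in three lines. This is a sketch of the standard reduction in my own notation, not necessarily the lecture's exact formulation:

  \text{Suppose a filter } F \text{ decides, for every model } M, \text{ whether } M \text{ ever emits forbidden output.}
  \text{Given any program } P \text{ and input } x, \text{ define } M_{P,x}: \text{ simulate } P \text{ on } x; \text{ if it halts, emit a forbidden string.}
  \text{Then } F(M_{P,x}) = \mathrm{true} \iff P \text{ halts on } x, \text{ so } F \text{ decides the halting problem, a contradiction.}

Rice's theorem generalizes the same move to every non-trivial semantic property of programs, which is why the conclusion does not depend on the details of any particular filter.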

Literature:

Glukhov et al. 2023, LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?

Banerjee et al. 2024, LLMs Will Always Hallucinate, and We Need to Live With This

Apple, The Illusion of Thinking

Monday November 24 (4:00-5:50pm) Landgrebe on the Replication Crisis. Jacko on the Ontological Foundations of Proxemics

Jobst Landgrebe: Complex Systems and Cognitive Science: Why the Replication Problem is here to stay

Video

Slides

The 'replication problem' is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of 'open science'. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.

Jan Jacko: Ontological Foundations of Proxemics

Video

Slides

Proxemics is the study of spatial behaviour in interpersonal communication. It rests on a set of implicit and explicit assumptions about the nature of space, embodiment, intentionality, and meaning. This presentation aims to articulate these assumptions and outline a conceptual framework for understanding proxemics as an ontologically grounded discipline.

--

Background on the replication crisis:

Reproducibility of Scientific Results, Stanford Encyclopedia of Philosophy, 2018
Science has been in a “replication crisis” for a decade
Irreproducibility Crisis and the Lehman Crash, Barry Smith, Youtube 2020
Slides
The replication problems which arise when AI is applied in scientific research
Is Psychology Finished?
Bayer tested some findings and only achieved a 21% replication rate for biomedical studies


Monday December 1 (4:00-5:50pm) Landgrebe on machine intelligence. Jacko on psychopathic AI

Jobst Landgrebe: Why we cannot create intelligence inside a machine

Timothy W. Coleman: Beyond the Limits of AI: Ontology as a Framework for Good System Design (Student presentation)

Michael Behun III: The Paradox within Artificial Intelligence Development

Jan Jacko: Are intelligent machines psychopathic by design?

There are two major paradigms in clinical psychology. The first treats mental and personality disorders as disturbances of an inner life: of subjective experience, affect, and self-awareness. This view cannot be meaningfully applied to artificial systems, for which no such subjectivity is given. The second paradigm is behavioural and functional. Here disorders, especially personality disorders, are defined as stable, recurrent patterns of behaviour, cognition, and interpersonal functioning that deviate from expected norms and impair adaptation. Psychopathy in this framework is a cluster of observable traits: persistent violation of social rules, instrumental treatment of others, chronically shallow or incongruent emotional expression, irresponsibility, and a striking absence of anxiety or inhibition in situations that normally elicit it. In this talk I adopt the second, behavioural paradigm and extend it to artificial systems, introducing the notion of AI quasi-personality.

Monday December 8 (4:00-5:50pm) Oral presentations (Compulsory for all students)

4:00 John Davis: Symbiotic Surveillance and Artificial Intelligence

4:15

4:30 Cristian Keroles: Scientific Realism, Paradigm Shifts, and the Feasibility of AGI

4:45 Mike Behun Jr.: Examining the Role of Formal Ontology and Hybrid AI in Achieving Trustworthy Results, Based on Domain Experts for High Stakes Systems.

5:00 Ore Afe:

5:15 Gregory DeFranco: Will Algorithms Control Us?

5:30 Claire Allen: Video Games and the Virtual World

Background Material

An Introduction to AI for Philosophers

Why not robot cops? Video
Why not robot cops? Slides

An Introduction to Philosophy for Computer Scientists

Video
Slides

John McCarthy, "What has AI in common with philosophy?"

Companion volume to Why Machines Will Never Rule the World

Podcasts and interviews on Why Machines Will Never Rule the World

Student Learning Outcomes

1. Comprehend the Architecture and Operation of Large Language Models: Explain the basic design and functioning of Large Language Models (LLMs) such as ChatGPT. Define and correctly use key terms.

2. Evaluate the Theoretical and Practical Limits of AI: Explain the limitations of AI systems as applications of Turing-computable mathematics. Critically assess claims about Artificial General Intelligence (AGI) and the “singularity.”

3. Examine Theories of Machine Consciousness, Transhumanism, and Simulation: Explain why machines lack intentionality and subjective experience.

4. Understand Ethical and Normative Dimensions of AI: Explain why AI systems cannot possess will, intention, or moral responsibility, and differentiate between AI ethics and ethics of AI use.

5. Apply Ontology-Based Strategies for AI Enhancement: Explain how ontologies and knowledge graphs can improve the explainability, consistency, and interoperability of AI systems. Identify strengths and weaknesses of ontology-based and neurosymbolic AI approaches.