<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ncorwiki.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Phismith</id>
	<title>NCOR Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://ncorwiki.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Phismith"/>
	<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php/Special:Contributions/Phismith"/>
	<updated>2026-04-11T17:56:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75544</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75544"/>
		<updated>2026-04-08T10:44:11Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with student presentations of papers on relevant topics.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good Old-Fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
Background:&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Interviews_and_podcasts_on_%27%27Why_Machines_Will_Never_Rule_the_World%27%27&amp;diff=75543</id>
		<title>Interviews and podcasts on &#039;&#039;Why Machines Will Never Rule the World&#039;&#039;</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Interviews_and_podcasts_on_%27%27Why_Machines_Will_Never_Rule_the_World%27%27&amp;diff=75543"/>
		<updated>2026-03-24T12:14:00Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Interviews and Podcasts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://www.futurity.org/artificial-intelligence-ai-2789642-2/ AI is cool, but will never reach human capability], Futurity podcast (August 12, 2022)&lt;br /&gt;
&lt;br /&gt;
[https://blog.apaonline.org/2022/09/23/why-machines-will-never-rule-the-world-artificial-intelligence-without-fear/ Blog of the American Philosophical Association: Interview with Charlie Taben] [https://www.youtube.com/watch?v=Zle7pJIIfFc Youtube], (August 30, 2022)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=T4HJi7dQzvg Systems Conversation] (with Dr Oliver Gao, Director, Systems Engineering, Cornell University, Ithaca, NY) (September 2, 2022)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=f7I6mtFkrOM &#039;&#039;&#039;AI is here, but will it rule us?&#039;&#039;&#039;], Wirkman Comments podcast with David Ramsey Steele, September 27, 2022.&lt;br /&gt;
&lt;br /&gt;
[https://youtu.be/XeQHey8WFjY &#039;&#039;&#039;Lecture to Philosophy and AI Research Group&#039;&#039;&#039;], University of Zurich, 15 October, 2022&lt;br /&gt;
&lt;br /&gt;
[https://www.nas.org/blogs/media/video-will-machines-rule-the-world? Will Machines Rule the World?] NAS Podcast with Scott Turner,  [https://www.youtube.com/watch?v=3QtrVQ6hmdo Youtube] (October 4, 2022) &lt;br /&gt;
&lt;br /&gt;
[https://www.digitaltrends.com/computing/why-ai-will-never-rule-the-world/ Why AI will never rule the world], Interview by Luke Dormehl on Digital Trends [https://philpapers.org/archive/DORWAW-2.pdf Philpapers] (August 8, 2022) &lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=IMnWAuoucjo Walid Saba on Why Machines Will Never Rule the World], Machine Learning Street Talk, December 15, 2022 (review starts halfway through)&lt;br /&gt;
&lt;br /&gt;
[https://www.cspicenter.com/p/why-the-singularity-might-never-come Why Machines Will Never Rule the World – On AI and Faith], Conversation between Jobst Landgrebe, Barry Smith and Rev. Jamie Franklin, Irreverend,  [https://youtu.be/43mM35X7x-c Youtube] (November 30, 2022)&lt;br /&gt;
&lt;br /&gt;
[https://www.cspicenter.com/p/why-the-singularity-might-never-come Why the Singularity Might Never Come], Interview with Richard Hanania, Center for the Study of Partisanship and Ideology (January 30, 2023) [https://www.youtube.com/watch?v=wwVQQHoORg4 Youtube]&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=bJ8tcposTek&amp;amp;list=PL-PSlrVaK5Iwe5CK06KCp1ZiCHNJcmJtN&amp;amp;index=1&amp;amp;pp=iAQB &amp;quot;Allmacht Künstliche Intelligenz?&amp;quot;] (&amp;quot;The omnipotence of artificial intelligence?&amp;quot;), Politicum, TV Berlin, February 2023&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=ze3J3yxVR5w&amp;amp;list=PL-PSlrVaK5Iwe5CK06KCp1ZiCHNJcmJtN&amp;amp;index=2&amp;amp;pp=iAQB &amp;quot;Bestimmte Ingenieure haben keine Ahnung in Mathematik&amp;quot;] (&amp;quot;Certain engineers have no clue about mathematics&amp;quot;), Politicum, TV Berlin, February 2023&lt;br /&gt;
&lt;br /&gt;
[https://www.oval.media/narrative-132-jobst-landgrebe/ Elon Musks Irrweg] (&amp;quot;Elon Musk&#039;s wrong path&amp;quot;), Interview with Robert Cibis (February 16, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=Y-yovYmd1_c Where there’s no will there’s no way], Interview with Alex Thomson, UKCommons (March 21, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=vO_JDTsrdiA Conversation with Jobst Landgrebe and Barry Smith: Why AI won’t rule the world], The Pangburn Hangout (May 5, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=3Ni3NiA29Pw AI and ChatGPT: Should we be worried?] Stever Peterson, Jobst Landgrebe and Barry Smith, National Association of Scholars (May 19, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://dataskeptic.com/blog/episodes/2023/why-machines-will-never-rule-the-world Why Machines Will Never Rule the World], Jobst Landgrebe and Barry Smith, Interview with Kyle Polich, Data Skeptic [https://www.youtube.com/watch?v=mPJaRrJJ_zI Youtube] (May 29, 2023)&lt;br /&gt;
 &lt;br /&gt;
[https://www.youtube.com/watch?v=uHqvQrHQSk8 Why AI Will Never Rule the World], Fidias Podcast (July 21, 2023)&lt;br /&gt;
&lt;br /&gt;
[https://philpapers.org/rec/SOLLAN L’intelligenza artificiale non dominerà il mondo] (&amp;quot;Artificial intelligence will not dominate the world&amp;quot;), interview with Barry Smith, &#039;&#039;Il Sole 24 Ore&#039;&#039; (April 27, 2024)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=SJbXt02ZC-c Will Machines Rule the World?], Brain in a Vat podcast (November 3, 2024)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=GbHTLfTrjAs Jobst Landgrebe Doesn&#039;t Believe In AGI | Liron Reacts], Doom Debates (October 2024)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=0qNlp5Hf5dU Jobst Landgrebe -- Can AI TAKE OVER The World?], Two Stewards Podcast (January 17, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://creators.spotify.com/pod/profile/ukcolumn/episodes/Jobst-Landgrebe-and-Barry-Smith-Why-Machines-Will-Never-Rule-the-World-e32d5l1 Interview with Jeremy Nell], UK Column (April 29, 2025) &lt;br /&gt;
&lt;br /&gt;
[https://rcr.media/episodes/tech-tuesday-jobst-landgrebe-the-real-limits-of-machine-intelligence-unveiled/ Jobst Landgrebe, The Real Limits Of Machine Intelligence Unveiled], Interview with Paul Brennan, RCR Podcast (June 3, 2025)&lt;br /&gt;
&lt;br /&gt;
[http://rcr.media/episodes/jobst-landgrebe-ai-reality-check-when-large-language-models-break-physics-laws/ Jobst Landgrebe, AI Reality Check: When Large Language Models Break Physics Laws], Interview with Paul Brennan, RCR Podcast (June 24, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=tAo8kO2CJNI&amp;amp;list=PLCobN2DevAuVMAKWcbBZewJLGuYIYQvW5 Jobst Landgrebe and Barry Smith, Why Machines Will Never Rule the World], UK Column (May 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.aporiamagazine.com/p/debate-can-intelligence-be-engineered Debate with Jobst Landgrebe and Barry Smith: Can Intelligence Be Engineered?], Aporia Podcast, (November 10, 2025)&lt;br /&gt;
&lt;br /&gt;
(November 19, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=Sln9HLNpUZc Why A.I. Will Never Rule The World], Haman Nature Podcast (November 25, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=Rw92EyPBpCY A.I. Won&#039;t Take Over The World...Or Will It?], Haman Nature Podcast (December 5, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://rcr.media/episodes/jobst-landgrebe-on-why-machines-will-never-rule-the-world-artificial-intelligence-without-fear/ Jobst Landgrebe On &#039;Why Machines Will Never Rule The World&#039;], RCR Podcast (December 5, 2025)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=2fMPRNTOjW8 Professor warnt: Die KI-Revolution wird scheitern!] (&amp;quot;Professor warns: the AI revolution will fail!&amp;quot;), Real Unit Schweiz, March 24, 2026&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75542</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75542"/>
		<updated>2026-02-24T13:34:26Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with student presentations of papers on relevant topics.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025),&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good Old-Fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
Background:&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity; an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
== Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity ==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy. &lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Main_Page&amp;diff=75541</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Main_Page&amp;diff=75541"/>
		<updated>2026-02-24T12:21:35Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* News */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The goal of the [https://ubwp.buffalo.edu/ncor/ National Center for Ontological Research] is to advance ontological investigation within the United States. NCOR serves as a vehicle to coordinate, to enhance, to publicize, and to seek funding for ontological research activities. It lays a special focus on ontology training and on the establishment of tools and measures for quality assurance of ontologies. NCOR provides ontology services to multiple organizations, including the US Department of Defense.&lt;br /&gt;
&lt;br /&gt;
== Events ==&lt;br /&gt;
&lt;br /&gt;
See &#039;&#039;&#039;[http://ncorwiki.buffalo.edu/index.php/Newsevents here]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For &#039;&#039;&#039;past events&#039;&#039;&#039; see [http://ncorwiki.buffalo.edu/index.php/Past_Events here]&lt;br /&gt;
&lt;br /&gt;
For the Buffalo Toronto Ontology Alliance (BoaT) see [https://urbandatacentre.ca/boat here]&lt;br /&gt;
&lt;br /&gt;
==News==&lt;br /&gt;
&lt;br /&gt;
[https://www.buffalo.edu/content/shared/university/news/news-center-releases/2026/01/TMD-NIH-grant.html UB is part of $17 million NIH grant to study temporomandibular disorders], January 5, 2026&lt;br /&gt;
&lt;br /&gt;
[https://www.buffalo.edu/ubnow/stories/2025/10/ontology-ms.html UB to offer a fully online graduate degree in ontology], October 31, 2025&lt;br /&gt;
&lt;br /&gt;
[https://www.linkedin.com/feed/update/urn:li:activity:7386854194780983297/ Are we witnessing the long-awaited alignment between DOLCE and BFO?], Jérémy Ravenel (naas.ai), October 16, 2025&lt;br /&gt;
&lt;br /&gt;
[https://www.linkedin.com/pulse/standing-giants-shoulders-what-happens-when-formal-ontology-truman-iefkc/ Tavi Truman: Standing on Giants&#039; Shoulders: What Happens When Formal Ontology Meets Modern Verification?] &lt;br /&gt;
&lt;br /&gt;
[https://www.linkedin.com/posts/jeremyravenel_why-is-bfo-so-powerful-bfo-basic-formal-activity-7250607560976732163-d7tZ/ Jérémy Ravenel (naas.ai): Why is BFO so powerful?]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/k6kl09zg8idud2jg1gkkv86mao7i8l34 BFO mandated by DOD-IC Joint Enterprise Standards Committee (JESC)], November 7, 2024&lt;br /&gt;
&lt;br /&gt;
[http://www.techguide.org/barry-smith Techguide Podcast], Careers in Tech for non-STEM students, October 30, 2024&lt;br /&gt;
&lt;br /&gt;
[https://apablog.substack.com/p/commercializing-ontology-lucrative APA Blog interview with Barry Smith and John Beverley], October 4, 2024.&lt;br /&gt;
&lt;br /&gt;
[https://www.buffalo.edu/news/releases/2024/02/department-of-defense-ontology.html DOD, Intelligence Community adopt resource developed by UB ontologists], Bert Gambini, UBNow, February 29, 2024.&lt;br /&gt;
&lt;br /&gt;
[https://bnnbreaking.com/world/us/us-defense-and-intelligence-to-adopt-bfo-and-cco-standards-for-enhanced-data-management U.S. Defense and Intelligence to Adopt BFO and CCO Standards for Enhanced Data Management], Shivani Chauhan, February 28, 2024.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.app.box.com/v/KI-und-Philosophie Article on BFO in the &#039;&#039;Frankfurter Allgemeine Zeitung&#039;&#039;, September 28, 2022, p. N3]. Translation of opening paragraph:&lt;br /&gt;
&lt;br /&gt;
:Industry standards are not usually associated with philosophy or the humanities. That is why the new ISO/IEC 21838 standard conceals a minor scientific-historical sensation. Because for the first time, a philosophical theory has now been declared an industry standard, namely: the &amp;quot;Basic Formal Ontology&amp;quot;, BFO for short. When you try to pronounce this acronym, it sounds a lot like &amp;quot;Buffalo,&amp;quot; and that&#039;s no coincidence. Because Barry Smith, the main brain behind this norm, is the Julian Park Professor of Philosophy at the University of Buffalo in northern New York State, not far from Niagara Falls ...&lt;br /&gt;
&lt;br /&gt;
:For full text see [https://buffalo.box.com/v/KI-und-Philosophie here].&lt;br /&gt;
&lt;br /&gt;
[https://blog.apaonline.org/2022/09/15/careers-in-ontology-an-interview-with-professor-barry-smith/ Interview with Barry Smith on &#039;&#039;&#039;Careers in Ontology&#039;&#039;&#039;], September 15, 2022&lt;br /&gt;
&lt;br /&gt;
[[Interviews and podcasts on &#039;&#039;Why Machines Will Never Rule the World&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
[https://www.routledge.com/9781032309934 New book on limits of AI published], August 12, 2022.&lt;br /&gt;
&lt;br /&gt;
[https://www.dpaonthenet.net/article/192369/Machines-ruling-the-world--Impossible--say-researchers.aspx Machines ruling the world? Impossible, say researchers]&lt;br /&gt;
&lt;br /&gt;
[https://www.amazon.com/gp/customer-reviews/R35NHUZZQN8226?ref=pf_vv_at_pdctrvw_srp This will totally blow your mind]&lt;br /&gt;
&lt;br /&gt;
[https://www.buffalo.edu/news/releases/2022/04/0290.html UB professor’s ontology work recognized in an international standard], April 29, 2022.&lt;br /&gt;
&lt;br /&gt;
[https://www.prnewswire.com/news-releases/oagi-and-iof-agree-to-produce-industrial-ontologies-301231565.html Press release on launch of Industrial Ontologies Foundry], February 19, 2021.&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=0giPMMoKR9s Video recording of talk by Barry Smith on &amp;quot;Defining Intelligence&amp;quot;], February 17, 2021&lt;br /&gt;
&lt;br /&gt;
[https://ncor-brasil.org/about/ NCOR-Brasil] established, December 1, 2020&lt;br /&gt;
&lt;br /&gt;
[http://medicine.buffalo.edu/news_and_events/news/2020/07/smith-ontology-covid-11561.html Using Ontology as Powerful Weapon in COVID-19 Fight], July 14, 2020&lt;br /&gt;
&lt;br /&gt;
[http://www.buffalo.edu/news/releases/2020/06/016.html Leveraging a powerful weapon in the fight against COVID-19 — ontology], June 10, 2020&lt;br /&gt;
&lt;br /&gt;
[http://www.buffalo.edu/ubnow/campus.host.html/content/shared/university/news/ub-reporter-articles/stories/2018/04/smith-capabilities-workshop.detail.html UB workshop to address human and machine capabilities], April 20, 2018&lt;br /&gt;
&lt;br /&gt;
[https://www.buffalo.edu/ctsi/ctsi-news.host.html/content/shared/www/ctsi/articles/academic_articles/working-group-seeks-to-extend-the-depth-and-functionality-of-bio.detail.html Working group seeks to extend the depth and functionality of biomedical ontologies], October 14, 2017 &lt;br /&gt;
&lt;br /&gt;
[http://www.buffalo.edu/cas/philosophy/news/latestnews/2016-win-ontology.html Barry Smith wins 2016 IAOA Ontology Competition], August 18, 2016&lt;br /&gt;
&lt;br /&gt;
[https://medicine.buffalo.edu/news_and_events/news.host.html/content/shared/smbs/news/2016/01/jensen-doctoral-un-5573.detail.html Doctoral Candidate Invited to Work on United Nations Project], January 4, 2016&lt;br /&gt;
&lt;br /&gt;
[http://xbrl.squarespace.com/journal/2013/2/23/advantages-of-financial-report-ontology-in-accounting-resear.html Advantages of the Financial Report Ontology in Accounting Research], February 23, 2013&lt;br /&gt;
&lt;br /&gt;
[http://ontology.buffalo.edu/IMMPORT/UB-Press-Release-2013.pdf UB Ontologists Win Bioinformatics Integration Award to Support National Institutes of Health]&lt;br /&gt;
&lt;br /&gt;
[[Announcing Clinical and Translational Science Ontology Affinity Group]]&lt;br /&gt;
&lt;br /&gt;
[http://www.sciencedaily.com/releases/2012/08/120820161058.htm?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+sciencedaily+(ScienceDaily%3A+Latest+Science+News) Information Overload in the Era of Big Data]&lt;br /&gt;
&lt;br /&gt;
[http://www.kurzweilai.net/botanists-building-ontologies-to-cope-with-information-overload Botanists building ontologies to cope with information overload]&lt;br /&gt;
&lt;br /&gt;
[[UB Applied Informatics Portal]] unveiled.&lt;br /&gt;
&lt;br /&gt;
==Advertising MS program==&lt;br /&gt;
:UBNow&lt;br /&gt;
:https://www.buffalo.edu/grad/programs/philosophy-ma.html &amp;lt;-- needs counterpart for phd program&lt;br /&gt;
:dept webpage&lt;br /&gt;
&lt;br /&gt;
==[https://ncorwiki.buffalo.edu/index.php/Education Education]==&lt;br /&gt;
[[Ontology 101]]&lt;br /&gt;
&lt;br /&gt;
==Online Courses==&lt;br /&gt;
&lt;br /&gt;
[http://ncorwiki.buffalo.edu/index.php/Education Barry Smith]&lt;br /&gt;
&lt;br /&gt;
[http://www.referent-tracking.com/RTU/ceusters_vita.html#teaching Werner Ceusters]&lt;br /&gt;
&lt;br /&gt;
==Defining &#039;Ontology&#039;==&lt;br /&gt;
&lt;br /&gt;
An ontology is a representation of some part of reality (e.g. medicine, social reality, physics). Smith states: “Ontology is the science of what is, of the kinds and structures of objects, properties, events, processes and relations in every area of reality…Ontology seeks to provide a definitive and exhaustive classification of entities in all spheres of being.” To represent reality accurately, an ontology includes the types of entities and events in a given domain (along with their definitions), arranged in a hierarchical structure, together with relations (such as part-of, depends-on, and caused-by) where necessary. Ontologies enable the formulation of robust and shareable descriptions of a given domain by providing a common controlled vocabulary for doctrine writers, IT developers, and war-fighters alike, thereby allowing these disparate communities to communicate with each other. An ontology should be a shared resource between communities, and its continued collaborative development should support the integration of information and facilitate knowledge discovery. These two goals are realized by ensuring wide dissemination of the ontology, so that it is used by many stakeholders and its terms are correspondingly familiar and readily used for search.&lt;br /&gt;
&lt;br /&gt;
== Basic Formal Ontology 2.0 ==&lt;br /&gt;
&lt;br /&gt;
[http://ncorwiki.buffalo.edu/index.php/Basic_Formal_Ontology_2.0 Basic Formal Ontology 2.0]&lt;br /&gt;
&lt;br /&gt;
== Basic Formal Ontology 2020 ==&lt;br /&gt;
&lt;br /&gt;
[http://ncorwiki.buffalo.edu/index.php/BFO_2020 BFO 2020]&lt;br /&gt;
&lt;br /&gt;
==Buffalo Toronto Ontology Alliance (BoaT)==&lt;br /&gt;
&lt;br /&gt;
*[https://urbandatacentre.ca/boat BoaT Home Page]&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Ontology_Day_(with_visitors_from_Toronto),_October_24,_2022 Inaugural meeting, October 24, 2022]&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/List_of_Toronto_ontology_contributions_(as_of_November_1,_2022 University of Toronto ontologies]&lt;br /&gt;
&lt;br /&gt;
*[https://ncorwiki.buffalo.edu/index.php/Basic_Formal_Ontology_Summit_Meeting BFO Summit Meeting, May 23-25, 2023], includes UB-Toronto-DHS session on government ontologies&lt;br /&gt;
&lt;br /&gt;
==Why Machines Will Never Rule the World==&lt;br /&gt;
&lt;br /&gt;
See [https://ncorwiki.buffalo.edu/index.php/Why_Machines here]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Press Items and Notices&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://www.futurity.org/artificial-intelligence-ai-2789642-2/ &#039;&#039;&#039;AI is cool, but will never reach human capability&#039;&#039;&#039;], Bert Gambini, August 25, 2022&lt;br /&gt;
&lt;br /&gt;
[https://www.digitaltrends.com/computing/why-ai-will-never-rule-the-world/ Why AI will never rule the world] &#039;&#039;&#039;Interview by Luke Dormehl on Digital Trends&#039;&#039;&#039;, September 25, 2022 [https://buffalo.box.com/v/Digital-trends-revised (Recording)]&lt;br /&gt;
&lt;br /&gt;
[https://calendar.buffalo.edu/event/iad-distinguished-speaker-series--why-machines-will-never-rule-the-world/ UB Lecture], September 20, 2022&lt;br /&gt;
&lt;br /&gt;
[https://philpapers.org/rec/SOLLAN L’intelligenza artificiale non dominerà il mondo] (&amp;quot;Artificial intelligence will not rule the world&amp;quot;), interview with Barry Smith, &#039;&#039;Il Sole 24 Ore&#039;&#039;, April 27, 2024.&lt;br /&gt;
&lt;br /&gt;
==The Philosophome==&lt;br /&gt;
&lt;br /&gt;
[http://ontology.buffalo.edu/philosophome/index_files/philosophome.html Philosophome Website]&lt;br /&gt;
&lt;br /&gt;
[[Philosophome | Philosophome Wiki]]&lt;br /&gt;
&lt;br /&gt;
==Semantics of Biodiversity==&lt;br /&gt;
&lt;br /&gt;
Paper: [https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0089606 Semantics in Support of Biodiversity Knowledge Discovery (PLoS ONE, 2013)]&lt;br /&gt;
&lt;br /&gt;
Video Presentations from: [http://biocodecommons.org/workshops/sob.html Semantics of Biodiversity Workshop (2012)] &lt;br /&gt;
&lt;br /&gt;
::[http://www.youtube.com/watch?v=ZrHYi7mgF9g Ontologies as a method of viewing data]&lt;br /&gt;
&lt;br /&gt;
::[http://www.youtube.com/watch?v=Fot1dOPLv_c Basic Formal Ontology (BFO)]&lt;br /&gt;
&lt;br /&gt;
::[http://www.youtube.com/watch?v=rWy3C0WmpZM How to build an ontology with BFO]&lt;br /&gt;
&lt;br /&gt;
::[http://www.youtube.com/watch?v=kaG92j0WqmI Tracking referents with Instance Unique Identifiers (IUIs)]&lt;br /&gt;
&lt;br /&gt;
::[http://www.youtube.com/watch?v=fHP0Dlk5wuo Tracking Changes in Our Understanding of Reality: Reality vs. Beliefs]&lt;br /&gt;
&lt;br /&gt;
::[http://www.youtube.com/watch?v=Of6bj28MQhY Darwin Core (DwC) and Basic Formal Ontology: Putting it All Together]&lt;br /&gt;
&lt;br /&gt;
:::Building Darwin Core top-down in BFO&lt;br /&gt;
:::Organisms, photographs, media&lt;br /&gt;
:::How to re-use ontologies&lt;br /&gt;
:::Principles of singular nouns, secondary use, understandability&lt;br /&gt;
:::Writing good definitions (DwC Examples)&lt;br /&gt;
:::Management strategies&lt;br /&gt;
:::Ontologies for reuse (BFO, EnvO, IDO, OBI, Plant Ontology, Uberon, IAO)&lt;br /&gt;
:::Educational resources (OBI, Protege, BFO)&lt;br /&gt;
&lt;br /&gt;
==Finance and Economics==&lt;br /&gt;
&lt;br /&gt;
[http://www.slideshare.net/BarrySmith3/an-application-of-bfo-to-services An Application of Basic Formal Ontology to the Ontology of Services and Commodities], Institute for Business Informatics, University of Koblenz, Germany July 23, 2013&lt;br /&gt;
&lt;br /&gt;
Barry Smith, [http://www.slideshare.net/BarrySmith3/2012-fima-talk Reference Data Integration: A Strategy for the Future], Financial Reference Data Management Conference (FIMA), New York, March 2012&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The Wernicke Ontology Principle&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Wernicke is an ontology-dependent AI system used to automate recurring business processes. It is based on formal logic developed by Jobst Landgrebe and co-workers at Cognotekt. Its ontologies do not have an Aristotelian taxonomic structure; instead, they are fully axiomatised and logically describe the syntactic structure of recurring language patterns in the Prolog subset of first-order logic. The use of terms in two or more axiomatic definitions of ontological entities creates an implicit network structure within the ontology.&lt;br /&gt;
&lt;br /&gt;
Examples (in German)&lt;br /&gt;
&lt;br /&gt;
1. Implication relations for verbs and verb phrases. (There are hundreds of examples of such formulae in each Wernicke ontology.)&lt;br /&gt;
&lt;br /&gt;
  ((zahlung(Y) AND nachkommen(Z) AND verb(Z,X,Y)) IMPL zahlen(Z))&lt;br /&gt;
  ((geld(Y) AND schicken(Z) AND (verb(Z,X) OR verb(Z,X,Y1))) IMPL zahlen(Z))&lt;br /&gt;
  ((kosten(Y) AND tragen(Z) AND verb(Z,X,Y)) IMPL zahlen(Z))&lt;br /&gt;
  ((überweisungsträger(Y) AND einwerfen(Z) AND verb(Z,X,Y)) IMPL zahlen(Z))&lt;br /&gt;
  ((bringen(Z) AND ausgleich(A) AND zum(B) AND mod(B,A,Z) AND verb(Z,X,Y)) IMPL zahlen(Z))&lt;br /&gt;
  ((möglich(A) AND mod(A,Z) AND sein(Z) AND (verb(Z,X) OR verb(Z,X,Y))) IMPL möglichsein(Z))  &lt;br /&gt;
  ((bitten(Z) AND mod(B,A,Z) AND möglichkeit(A) AND verb(Z,X,Y)) IMPL möglichsein(Z))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Temporal structures&lt;br /&gt;
&lt;br /&gt;
  ((übermorgen(W) AND (Y=2)) IMPL zeitabstand(W,in,Y,tagen))&lt;br /&gt;
  ((morgen(W) AND (Y=1)) IMPL zeitabstand(W,in,Y,tagen))&lt;br /&gt;
  ((heute(W) AND (Y=0)) IMPL zeitabstand(W,in,Y,tagen))&lt;br /&gt;
  ((gestern(W) AND (Y=1)) IMPL zeitabstand(W,vor,Y,tagen))&lt;br /&gt;
  ((vorgestern(W) AND (Y=2)) IMPL zeitabstand(W,vor,Y,tagen))&lt;br /&gt;
&lt;br /&gt;
3. Domain pattern formulae (ontological entities)&lt;br /&gt;
&lt;br /&gt;
  past payment a: ((zahlung(X) OR geld(X)) AND rausgehen(Z) AND (I=vergangen) AND verb(Z,X) AND vergangentemp(Z))&lt;br /&gt;
  past payment b: ((zahlung(Y) AND tätigen(Z) AND verb(Z,X,Y) AND (I=vergangen) AND vergangentemp(Z)))&lt;br /&gt;
  past payment c: ((sein(Z) AND (betrag(X) OR forderung(X)) AND zahlen(A) AND mod(A,Z) AND (I=vergangen) AND verb(Z,X) AND NOT temp_mod(Z, praet, konj2)))&lt;br /&gt;
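As an illustration only (not the Cognotekt implementation), the first verb-phrase implication above can be evaluated over a small set of ground facts roughly as follows; the tuple-based fact encoding and the function name derive_zahlen are assumptions made for this sketch:

```python
# Illustration only: evaluates the Wernicke-style implication
#   ((zahlung(Y) AND nachkommen(Z) AND verb(Z,X,Y)) IMPL zahlen(Z))
# over ground facts. The fact encoding is an assumption for this sketch.

def derive_zahlen(facts):
    """Return the set of events Z for which zahlen(Z) can be derived."""
    zahlung = {args[0] for name, args in facts if name == "zahlung"}
    nachkommen = {args[0] for name, args in facts if name == "nachkommen"}
    derived = set()
    for name, args in facts:
        if name == "verb" and len(args) == 3:
            z, _x, y = args
            # antecedent: zahlung(Y), nachkommen(Z), verb(Z,X,Y)
            if z in nachkommen and y in zahlung:
                derived.add(z)
    return derived

# "Er kommt der Zahlung nach": zahlung(p1), nachkommen(e1), verb(e1, er, p1)
facts = [
    ("zahlung", ("p1",)),
    ("nachkommen", ("e1",)),
    ("verb", ("e1", "er", "p1")),
]
print(derive_zahlen(facts))  # prints {'e1'}
```

In the real system such rules are stated once in the ontology and applied by a logic engine, rather than hand-coded per predicate as here.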
&lt;br /&gt;
== Information Ontology==&lt;br /&gt;
&lt;br /&gt;
[http://ncorwiki.buffalo.edu/index.php/BFO-Based_Data_and_Information_Ontologies BFO-based data and information ontologies]&lt;br /&gt;
&lt;br /&gt;
==Military and Intelligence Ontology==&lt;br /&gt;
&lt;br /&gt;
[[Common Core Ontologies]]&lt;br /&gt;
&lt;br /&gt;
JFCOM: [[Semantic Web and Joint Training]] (2010)&lt;br /&gt;
&lt;br /&gt;
I2WD: Semantic Enhancement for DSGS-A: [[Distributed Development of a Shared Semantic Resource]] (2012-13)&lt;br /&gt;
&lt;br /&gt;
I2WD: [http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2298761 PED Fusion via Enterprise Ontology]&lt;br /&gt;
&lt;br /&gt;
[http://ncor.buffalo.edu/ontologies/AIRS_Ontologies.pdf Common Core Ontologies (preliminary statement)]&lt;br /&gt;
&lt;br /&gt;
[[Joint Doctrine Ontology]]&lt;br /&gt;
&lt;br /&gt;
[http://ncorwiki.buffalo.edu/index.php/Ontology_and_the_Navy_SYSCOMs_Systems_Engineering_Transformation_Process Ontology for Navy Systems Engineering]&lt;br /&gt;
&lt;br /&gt;
== Ontology of Planning ==&lt;br /&gt;
&lt;br /&gt;
[[Ontology of Planning]]&lt;br /&gt;
&lt;br /&gt;
== Ontology of Engineering ==&lt;br /&gt;
&lt;br /&gt;
[[BFO-Based Engineering Ontologies]]&lt;br /&gt;
&lt;br /&gt;
[https://s3.amazonaws.com/ontologforum/OntologySummit2016/2016-03-17_Engineering/Reference-Ontology-for-Manufacturing--BobYoung_20160317.pdf Bob Young: Towards a Reference Ontology for Manufacturing] (2016)&lt;br /&gt;
&lt;br /&gt;
[http://www.tandfonline.com/eprint/sUe6G9RNtb7tgjQtgtkC/full Interoperable Manufacturing Knowledge Systems] (2017)&lt;br /&gt;
&lt;br /&gt;
[[Ontology of Engineering]]&lt;br /&gt;
&lt;br /&gt;
[http://ncorwiki.buffalo.edu/index.php/Ontology_and_the_Navy_SYSCOMs_Systems_Engineering_Transformation_Process Ontology for Navy Systems Engineering]&lt;br /&gt;
&lt;br /&gt;
[[Product Life Cycle Ontologies]]&lt;br /&gt;
&lt;br /&gt;
[[Modeling and Simulation]]&lt;br /&gt;
&lt;br /&gt;
[[Systems Engineering Bootcamp]]&lt;br /&gt;
&lt;br /&gt;
== Materials Ontology ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://datascience.codata.org/articles/abstract/10.2481/dsj.5.52/ Toshihiro Ashino and Mitsutane Fujita: Definition of a Web Ontology for Design-Oriented Material Selection] (2006)&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
:[http://ontorule-project.eu/resources/steel.html Steel Industry Ontology] / [http://ontorule-project.eu/resources/steel.owl .owl file]&lt;br /&gt;
&lt;br /&gt;
:[http://ceur-ws.org/Vol-886/paper_1.pdf A Systematic Approach to Developing Ontologies for Manufacturing Service Modeling]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
[[Buffalo Engineering Ontology]]&lt;br /&gt;
&lt;br /&gt;
== Ontology for Clinical and Translational Science ==&lt;br /&gt;
&lt;br /&gt;
[[Clinical and Translational Science Ontology Group]]&lt;br /&gt;
&lt;br /&gt;
[[Infectious Disease Ontology]]&lt;br /&gt;
&lt;br /&gt;
[http://ncorwiki.buffalo.edu/index.php/Immunology_Ontologies Immunology Ontologies]&lt;br /&gt;
&lt;br /&gt;
== Microbiome Ontology ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Ontology&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3535841/ Improved Gene Ontology Annotation for Biofilm Formation, Filamentous Growth, and Phenotypic Switching in Candida albicans]&lt;br /&gt;
&lt;br /&gt;
[http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199382514.001.0001/acprof-9780199382514-chapter-7 What Biofilms Can Teach Us about Individuality]&lt;br /&gt;
&lt;br /&gt;
[https://ac.els-cdn.com/S1532046415000507/1-s2.0-S1532046415000507-main.pdf?_tid=ca8ad71a-c168-11e7-b687-00000aacb35f&amp;amp;acdnat=1509804327_c9962782780a7f2935bfc9140684d5c0 MorphoCol: An ontology-based knowledgebase for the characterisation of clinically significant bacterial colony morphologies]&lt;br /&gt;
&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1024.59&amp;amp;rep=rep1&amp;amp;type=pdf Designing an Ontology Tool for the Unification of Biofilms Data]&lt;br /&gt;
&lt;br /&gt;
[http://press.igsb.anl.gov/earthmicrobiome/protocols-and-standards/empo/ Earth Microbiome Project Ontology (EMPO)]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Human Microbiome&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[http://www.cell.com/trends/microbiology/fulltext/S0966-842X(14)00023-7 Functional and phylogenetic assembly of microbial communities in the human microbiome]&lt;br /&gt;
&lt;br /&gt;
[http://www.cmaj.ca/content/187/11/825.short#sec-2 The human microbiome], including as appendix: [http://www.cmaj.ca/content/suppl/2015/05/19/cmaj.141072.DC1/14-1072-1-at.pdf A microbiome glossary]&lt;br /&gt;
&lt;br /&gt;
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3426293/ Defining the Human Microbiome]&lt;br /&gt;
&lt;br /&gt;
[https://www.biorxiv.org/content/early/2017/08/16/176784 MicrobiomeDB: a systems biology platform for integrating, mining and analyzing microbiome experiments]&lt;br /&gt;
&lt;br /&gt;
[https://hmpdacc.org Human Microbiome Project]&lt;br /&gt;
&lt;br /&gt;
[http://muse.jhu.edu/article/564608/pdf Parts and Wholes: The Human Microbiome, Ecological Ontology, and the Challenges of Community] &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Microbiomes and the external environment&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[http://www.earthmicrobiome.org/ The Earth Microbiome]&lt;br /&gt;
:[http://www.earthmicrobiome.org/protocols-and-standards/empo/ Earth Microbiome Project Ontology:EMPO]&lt;br /&gt;
&lt;br /&gt;
[https://www.nature.com/nature/journal/vaop/ncurrent/full/nature24621.html A communal catalogue reveals Earth’s multiscale microbial diversity] &lt;br /&gt;
&lt;br /&gt;
[http://metasub.org/ MetaSUB: Metagenomics and Metadesign of Subways &amp;amp; Urban Biome]&lt;br /&gt;
&lt;br /&gt;
[https://www.ncbi.nlm.nih.gov/pubmed/24305737/ Tracking human sewage microbiome in a municipal wastewater treatment plant]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Varia&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[http://www.frontiersinai.com/turingfiles/July/12.pdf#page=9 Collective bio-molecular processes: The hidden ontology of systems biology]&lt;br /&gt;
&lt;br /&gt;
[https://academic.oup.com/bib/article/doi/10.1093/bib/bbx120/4210288/A-review-of-methods-and-databases-for-metagenomic A review of methods and databases for metagenomic classification and assembly]&lt;br /&gt;
&lt;br /&gt;
== Suggested Reading ==&lt;br /&gt;
&lt;br /&gt;
[http://ontology.buffalo.edu/smith/articles/ontologies.htm Ontology: An Introduction]&lt;br /&gt;
&lt;br /&gt;
[http://www.nature.com/nbt/journal/v25/n11/pdf/nbt1346.pdf Coordinated Evolution of Biomedical Ontologies]&lt;br /&gt;
&lt;br /&gt;
[[Avoiding Perspective-Relative Silos]]&lt;br /&gt;
&lt;br /&gt;
[http://ceur-ws.org/Vol-555/paper5.pdf Universal Core Semantic Layer]&lt;br /&gt;
&lt;br /&gt;
== Training Videos  ==&lt;br /&gt;
&lt;br /&gt;
[http://ncorwiki.buffalo.edu/index.php/Ontology_for_Intelligence,_Defense_and_Security Ontology for Intelligence, Defense and Security]&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=fB6BjF4lAQ4&amp;amp;feature=related A Repeatable Process for Ontology Development]&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/v/Z5o1SpPqNrA Avoiding Semantic Stovepipes: Five Ontological Principles for Interoperability]&lt;br /&gt;
&lt;br /&gt;
[http://www.youtube.com/watch?v=JkQG1_gsXtc War-Fighter Ontology]&lt;br /&gt;
&lt;br /&gt;
==Studying Ontology in Buffalo==&lt;br /&gt;
&lt;br /&gt;
[http://www.philosophy.buffalo.edu/graduate/areas_of_study/phd/ Areas of Study]&lt;br /&gt;
&lt;br /&gt;
[http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2011_02_11/caredit.a1100012 Careers in ontology]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75540</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75540"/>
		<updated>2026-02-23T16:00:45Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
==Draft Schedule==&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with student presentations of papers on relevant topics.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, chapter 7 (chapter 8 of the 2nd edition)&lt;br /&gt;
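The &#039;How regression works&#039; item above can be made concrete with a minimal sketch. The data points and the closed-form least-squares fit below are invented for illustration and are not taken from the course materials.&lt;br /&gt;

```python
# Minimal sketch of basic stochastic AI: simple linear regression fitted
# by ordinary least squares. The data points are invented for illustration.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]  # roughly y = 2x

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Closed-form least-squares estimates of slope and intercept
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # prints: 1.96 0.14
```

Stochastic AI in this basic sense simply estimates parameters from data; the same idea, scaled up, underlies far larger statistical models.&lt;br /&gt;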
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
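Similarly, the &#039;Neural networks and deep learning&#039; item can be illustrated by a toy single-neuron network trained by gradient descent; the task (logical AND), the learning rate, and the epoch count are all invented for this sketch.&lt;br /&gt;

```python
import math

# Toy "neural network": one sigmoid neuron trained by gradient descent
# to learn logical AND. All hyperparameters are invented for the sketch.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w1, w2, b = 0.0, 0.0, 0.0   # weights and bias, initialized to zero
lr = 0.5                    # learning rate

for _ in range(10000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error, using sigmoid'(z) = out * (1 - out)
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # the neuron has learned AND: [0, 0, 0, 1]
```

Deep learning stacks many such units in layers and computes the gradients by backpropagation, but the parameter-update loop is the same in kind.&lt;br /&gt;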
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings -- as when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75539</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75539"/>
		<updated>2026-02-20T21:51:07Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with student presentations of papers on relevant topics.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Essay with presentation: 70%&lt;br /&gt;
:Essay with no presentation: 85%&lt;br /&gt;
:Presentation: 15%&lt;br /&gt;
:Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid AI&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
:&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, chapter 7 (chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings -- as when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
== Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
== Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, May 12 (09:30 - 12:15) AI and Creativity==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2 ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75538</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75538"/>
		<updated>2026-02-20T12:00:12Z</updated>

		<summary type="html">&lt;p&gt;Phismith: th&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old-fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, &#039;&#039;Fortune&#039;&#039;, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==6. Tuesday, May 12 (9:30 - 12:15) AI and Creativity==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75537</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75537"/>
		<updated>2026-02-20T11:57:42Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Wednesday April 30 (13:30 - 16:30) Explicit, implicit, practical, personal and tacit knowledge */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old-fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, &#039;&#039;Fortune&#039;&#039;, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==6. Tuesday, May 12 (9:30 - 12:15) AI and Creativity==&lt;br /&gt;
&lt;br /&gt;
==Wednesday, May 13 (13:30 - 16:30) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation? 2: Human Creativity==&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75536</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75536"/>
		<updated>2026-02-20T11:55:59Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
:Background&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why machines will never rule the world&#039;&#039;, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: &#039;&#039;Intelligence&#039;&#039; 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, &#039;&#039;Fortune&#039;&#039;, June 15, 2023.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:Video&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==6. Tuesday, May 12 (09:30 - 12:15) AI and Creativity==&lt;br /&gt;
&lt;br /&gt;
==Wednesday April 30 (13:30 - 16:30) Explicit, implicit, practical, personal and tacit knowledge==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation? 2: Human Creativity==&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI=and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75535</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75535"/>
		<updated>2026-02-20T11:45:48Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
:Background&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why machines will never rule the world&#039;&#039;, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. Mainstream Science on Intelligence. In: &#039;&#039;Intelligence&#039;&#039; 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
:Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, &#039;&#039;Fortune&#039;&#039;, June 15, 2023.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized&lt;br /&gt;
:The idea that human beings are simulations can be rejected&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
:Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:Video&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Tuesday, April 29 (13:30 - 16:30) An introduction to the statistical foundations of AI ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/AI-and-the-Future An introduction to the statistical foundations of AI]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/Statistical-Foundations-of-AI Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The types of AI&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
:Background &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why machines will never rule the world&#039;&#039;, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==Wednesday April 30 (13:30 - 16:30) Explicit, implicit, practical, personal and tacit knowledge==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Personal knowledge&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Video]&lt;br /&gt;
&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation? 2: Human Creativity==&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/Living-in-a-Simulation Are we living in a simulation?]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation Video]&lt;br /&gt;
:&#039;&#039;&#039;[https://buffalo.box.com/v/AI-and-the-Future The Future of Artificial Intelligence]&#039;&#039;&#039;, Slides&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
Replication:&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
:Room: A23&lt;br /&gt;
&lt;br /&gt;
:9:45 Julien Mommer, What is the Intelligence in &amp;quot;Artificial Intelligence&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;ChatGPT and its Future&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Indispensability of Human Creativity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Capabilities: The Interesting Version of the Story&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Student Presentations&#039;&#039;&#039;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Slides]&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75534</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75534"/>
		<updated>2026-02-20T11:41:20Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Friday May 2 (13:30-16:30) Are We Living in a Simulation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why AI systems therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2	Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3.	Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
==5.	Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75533</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75533"/>
		<updated>2026-02-20T11:40:23Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why AI systems therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2	Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3.	Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
==5.	Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75532</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75532"/>
		<updated>2026-02-20T11:40:00Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so, will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why Machines Will Never Rule the World, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75531</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75531"/>
		<updated>2026-02-20T11:39:35Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so, will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75530</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75530"/>
		<updated>2026-02-20T11:39:16Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so, will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025)&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
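As a hands-on pointer to the &#039;How regression works&#039; topic listed above, here is a minimal sketch (illustrative only, not course material) of how basic stochastic AI fits a model to data: ordinary least squares for a straight line, in pure Python. The function name and the toy data are purely hypothetical.&lt;br /&gt;

```python
# Minimal sketch of "how regression works": ordinary least squares
# for a line y = a + b*x, fitted to toy data. Pure Python, no libraries.

def fit_line(xs, ys):
    """Return intercept a and slope b minimising the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    # Intercept: the fitted line passes through the mean point.
    a = mean_y - b * mean_x
    return a, b

# Toy data lying exactly on y = 1 + 2x.
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
a, b = fit_line(xs, ys)
print(a, b)  # 1.0 2.0
```

The point of the sketch: a regression model does not understand its data; it merely minimises a numerical error criterion, which is the sense in which such AI is "stochastic" rather than intelligent.&lt;br /&gt;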
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75529</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75529"/>
		<updated>2026-02-20T11:38:41Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality Are we living in a simulation? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems==&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75528</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75528"/>
		<updated>2026-02-20T11:37:48Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 1. Monday, May 4 (13:30-16:15) Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field and 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality: Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75527</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75527"/>
		<updated>2026-02-20T11:13:32Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 2	Tuesday May 5 (09:30-12:15) Transhumanism as the Ultimate Stage of Cartesianism */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down and say &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2	Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Challenges the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality: Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why Machines Will Never Rule the World, Chapter 7 (Chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75526</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75526"/>
		<updated>2026-02-20T11:13:12Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 2	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down and say &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2	Tuesday May 5 (09:30-12:15) Transhumanism as the Ultimate Stage of Cartesianism ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Challenges the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality: Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why Machines Will Never Rule the World, Chapter 7 (Chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75525</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75525"/>
		<updated>2026-02-20T11:11:41Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
:Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism&lt;br /&gt;
:Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
:Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
:Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
:Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
:Wednesday, May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; The Limits of AI and the Limits of Physics, Part 1&lt;br /&gt;
:Friday, May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Limits and Dangers of AI?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality: Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
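The outline item &#039;How regression works&#039; can be illustrated with a short sketch. This is an editorial illustration, not part of the course materials; the toy data and variable names are invented. It fits a line y = a*x + b by ordinary least squares, the simplest case of the stochastic (statistical) AI surveyed in this unit.&lt;br /&gt;

```python
# Editorial sketch (assumed toy data): ordinary least squares fit of y = a*x + b.
# The slope is cov(x, y) / var(x); the intercept follows from the means.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x, with small noise
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var = sum((x - mean_x) ** 2 for x in xs)
a = cov / var            # fitted slope
b = mean_y - a * mean_x  # fitted intercept
print(a, b)
```

Neural networks generalize this idea: many such fits, composed and made nonlinear, with parameters found by iterative optimization rather than a closed formula.&lt;br /&gt;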
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75524</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75524"/>
		<updated>2026-02-20T00:55:13Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
Tuesday May 5 (09:30-12:15) Transhumanism as the Ultimate Stage of Cartesianism&lt;br /&gt;
Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
Friday, May 8 (13:30 - 16:15) The Impossibility of Artificial General Intelligence &lt;br /&gt;
Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
Wednesday May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence&lt;br /&gt;
Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1.	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2.	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4.	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
:6.	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7.	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8.	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R. V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Limits and Dangers of AI?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized&lt;br /&gt;
Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality: Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
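The outline item &#039;How regression works&#039; can be illustrated with a short sketch. This is an editorial illustration, not part of the course materials; the toy data and variable names are invented. It fits a line y = a*x + b by ordinary least squares, the simplest case of the stochastic (statistical) AI surveyed in this unit.&lt;br /&gt;

```python
# Editorial sketch (assumed toy data): ordinary least squares fit of y = a*x + b.
# The slope is cov(x, y) / var(x); the intercept follows from the means.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x, with small noise
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var = sum((x - mean_x) ** 2 for x in xs)
a = cov / var            # fitted slope
b = mean_y - a * mean_x  # fitted intercept
print(a, b)
```

Neural networks generalize this idea: many such fits, composed and made nonlinear, with parameters found by iterative optimization rather than a closed formula.&lt;br /&gt;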
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75523</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75523"/>
		<updated>2026-02-20T00:53:43Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
Monday, May 4 (13:30-16:15) AI and Philosophy: An Introduction&lt;br /&gt;
Tuesday May 5 (09:30-12:15) Transhumanism as the Ultimate Stage of Cartesianism&lt;br /&gt;
Wednesday, May 6 (09:30 - 12:15) AI and the Theory of Dynamic and Complex Systems&lt;br /&gt;
Thursday, May 7 (09:30 - 12:15) AI and History, Geography and Law&lt;br /&gt;
Friday, May 8 (13:30 - 16:15) Explicit, Implicit, Practical, Personal, Tacit and Collective Knowledge &lt;br /&gt;
Tuesday, May 12 (09:30 - 12:15) AI and Creativity&lt;br /&gt;
Wednesday May 13 (09:30 - 12:15) Artificial Intelligence and Human Intelligence; &lt;br /&gt;
Friday May 15 (09:30-12:15) The Limits of AI and the Limits of Physics, Part 2&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will AI systems ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1.	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2.	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4.	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
:6.	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7.	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8.	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2. Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality: Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
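The list above names regression as the core of basic stochastic AI. A minimal sketch of how regression works (our illustration, not from the course materials): fit a line to data by least squares.&lt;br /&gt;

```python
# Minimal sketch of how (linear) regression works: fit y = a*x + b by
# least squares, the simplest instance of "basic stochastic AI".
# Illustrative only; names here are our own, not from the course.
import numpy as np

def fit_line(x, y):
    """Return slope and intercept minimizing the squared error."""
    X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[0], coef[1]  # slope, intercept

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0  # noiseless data lying exactly on y = 2x + 1
slope, intercept = fit_line(x, y)
```

On noiseless data the fit recovers the generating line exactly; with noisy data it returns the line minimizing the sum of squared residuals.&lt;br /&gt;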
&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Ontology_and_Artificial_Intelligence_-_Fall_2025&amp;diff=75522</id>
		<title>Ontology and Artificial Intelligence - Fall 2025</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Ontology_and_Artificial_Intelligence_-_Fall_2025&amp;diff=75522"/>
		<updated>2026-01-21T13:31:29Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Monday, September 22 (4:30 - 16:15) Machine Consciounsess, Transhumanism, and Ecological Psychology */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Department of Philosophy, University at Buffalo [[Ontology and AI]]&lt;br /&gt;
&lt;br /&gt;
Fall 2025 - PHI609SEM-SMI2 - Special Topics: Ontology and Artificial Intelligence - Class Number 24371&lt;br /&gt;
&lt;br /&gt;
Faculty: [https://ontology.buffalo.edu/Smith Barry Smith]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Hybrid&#039;&#039;&#039;&lt;br /&gt;
: in person: Monday 4-5:50pm, 141 Park Hall&lt;br /&gt;
: remote synchronous: Monday 4-5:50pm; dial-in details will be supplied by email&lt;br /&gt;
: remote asynchronous: dial-in details will be supplied by email; must attend synchronously (either online or in person) on December 8&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
All enrolled students must submit to BS a Starting Draft version of their essay by November 10 at the latest. They must submit a full version of their essay and of the associated powerpoint deck by December 8. &lt;br /&gt;
&lt;br /&gt;
Word length requirements are as follows:&lt;br /&gt;
&lt;br /&gt;
:PhD candidates: &lt;br /&gt;
::2 credit hours: 2000 words / starting draft: 1000 words&lt;br /&gt;
::3 credit hours: 2000 + 3000 words / starting draft: 1000 + 1000 words&lt;br /&gt;
:Masters candidates:&lt;br /&gt;
::2 credit hours: 1500 words / starting draft: 750 words&lt;br /&gt;
::3 credit hours: 1500 + 2000 words / starting draft: 750 + 750 words&lt;br /&gt;
:Undergraduate candidates&lt;br /&gt;
::2 credit hours: 1000 words / starting draft: 500 words&lt;br /&gt;
::3 credit hours: 1500 words / 500 + 500 words&lt;br /&gt;
&lt;br /&gt;
3 credit hour candidates may submit a single essay provided its length conforms to the combined limits listed above.&lt;br /&gt;
&lt;br /&gt;
The starting draft should be your own work: no use of LLMs. All candidates are, however, welcome to use ChatGPT to polish their starting drafts, provided that they follow the rules set forth here: &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading for 2 Credit Hours Course (PhD candidates)&#039;&#039;&#039;&lt;br /&gt;
:Essay (at least 2000 words): 40%&lt;br /&gt;
:Presentation (and accompanying powerpoint deck) on December 8: 40%&lt;br /&gt;
:Class Participation (for in person and remote synchronous students) 20%&lt;br /&gt;
:Oral exam (for remote asynchronous students) 20%&lt;br /&gt;
&lt;br /&gt;
Essays may include software code and internet portal or database content where relevant.&lt;br /&gt;
&lt;br /&gt;
Students taking this course for 3 credit hours will be required to prepare an additional essay of 3000 words, together with class presentation and powerpoint deck. The total contribution for these two essays is 40%.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Policy on use of AI&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
There are two options:&lt;br /&gt;
&lt;br /&gt;
Option 1: Include a declaration on p. 1 to the effect that the essay was written entirely without&lt;br /&gt;
any sort of AI assistance. I reserve the right to use software tools, but also my own judgment, to&lt;br /&gt;
ensure that the essay was written by you. Grades under option 1 will be determined by the quality&lt;br /&gt;
of your essay.&lt;br /&gt;
&lt;br /&gt;
Option 2 is in three steps:&lt;br /&gt;
:Step 1. Create a draft in your own words of an essay that is about half as long as your target length. This should be a substantive draft, but it may contain, for example, rough notes pointing to further lines of development. Not only this initial draft, but also all further steps in the list below, should rely on your study of the relevant literature. Both your draft and your final essay should accordingly contain lists of references.&lt;br /&gt;
:Step 2. Submit this draft to me at phismith@buffalo.edu by the middle of the semester.&lt;br /&gt;
:Step 3. Create a new prompt using your draft as an attachment, with an instruction such as: &#039;&#039;show me how I can improve the attached&#039;&#039;. This will start a potentially long process of improving your essay, incorporating further contributions from you together with assistance from the LLM. You should use prompts to steer the style of the LLM output in the direction of a style appropriate to serious academic research, with references, quotations, and definitions as needed. Most importantly: be aware that LLMs often make errors (called &#039;hallucinations&#039;), for example inventing references to literature which does not in fact exist.&lt;br /&gt;
:Step 4. The LLM has been keeping track of everything you tell it to do since you started the new chat. When you think you might be ready to submit, use the LLM save function to generate a URI linking to all the interactions thus far – effectively a log of your process. This log, together with your initial and final essays, will form part of what is evaluated for your grade.&lt;br /&gt;
:Step 5. When you truly are ready to submit, press save one last time and take a note of the link; send me this link, together with your completed essay, and with any notes on features of the log you wish to point out -- for example, requests that I ignore specific chains of prompts because they proved to be dead ends.&lt;br /&gt;
Grades under Option 2 will be determined on the basis of (a) originality of the initial draft, (b) creativity of your prompts, (c) quality of final essay.&lt;br /&gt;
&lt;br /&gt;
Attendance at the synchronous session on December 8, featuring student presentations, is compulsory for all students.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Ontology (also called &#039;metaphysics&#039;) is a subfield of philosophy which aims to establish the kinds of entities in the world -- including both the material and the mental world -- and the relations between them. Applied ontology applies philosophical ideas and methods to support those who are collecting, using, comparing, refining, evaluating or (today above all) generating data.&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as &#039;intelligent&#039;. On the strong version, the ultimate goal of AI is to create what is called &#039;&#039;Artificial General Intelligence&#039;&#039; (AGI), by which is meant an artificial system that is as intelligent as a human being. ChatGPT and other large language models (LLMs) attempt to generate data from other data, where the latter are obtained for example by crawling the internet. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Required reading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;[https://buffalo.box.com/v/Why-machines-1e Why Machines Will Never Rule the World: Artificial Intelligence without Fear]&#039;&#039; (Routledge 2022; revised and enlarged [https://buffalo.box.com/v/Second-edition 2nd edition] published in 2025). &lt;br /&gt;
&lt;br /&gt;
See also offer [https://www.routledge.com/Why-Machines-Will-Never-Rule-the-World-Artificial-Intelligence-without-Fear/Landgrebe-Smith/p/book/9781032941400?srsltid=AfmBOoor0YJakTv88G0LUq0tWvBh3YS604AK0Gfr8Bd0YYgsgq1U6J7y here] &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Monday, August 25 (4:00-5:50pm) The Glory and the Misery of Large Language Models==&lt;br /&gt;
&lt;br /&gt;
We will provide a brief introduction to Large Language Models such as ChatGPT, focusing not only on positive but also on negative aspects of how they work.&lt;br /&gt;
&lt;br /&gt;
:[https://www.youtube.com/watch?v=JMD_1yA3TXk Video1]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Glory-and-Misery Slides1]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/s/ufnf1gwozzzd3hpmcmmbz2j7l0dzht56 Transcript]&lt;br /&gt;
&lt;br /&gt;
GPT-5 and the French and Indian War:&lt;br /&gt;
&lt;br /&gt;
:[https://www.youtube.com/watch?v=Lm4mCgAsI6I Video2]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/French-and-Indian-War Slides2]&lt;br /&gt;
&lt;br /&gt;
:Summary of the argument of &#039;&#039;Why Machines Will Never Rule the World&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:What does &#039;stochastic&#039; mean in &#039;stochastic AI&#039;&lt;br /&gt;
&lt;br /&gt;
:What is &#039;scaling&#039;&lt;br /&gt;
&lt;br /&gt;
:What are hallucinations?&lt;br /&gt;
&lt;br /&gt;
:Teach yourself history with ChatGPT&lt;br /&gt;
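:The word &#039;stochastic&#039; in the topics above can be made concrete with a small sketch (ours, not from the course materials): a language model does not pick the single best next token but samples from a probability distribution over tokens, which is why the same prompt can yield different answers.&lt;br /&gt;

```python
# Sketch of what 'stochastic' means in 'stochastic AI' (illustrative only):
# a model outputs scores (logits) over possible next tokens and samples
# from the resulting probability distribution rather than always taking
# the top-scoring token.
import numpy as np

def next_token_probs(logits, temperature=1.0):
    """Convert logits to a probability distribution (softmax)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    exps = np.exp(scaled - scaled.max())  # subtract max for stability
    return exps / exps.sum()

logits = [2.0, 1.0, 0.1]          # hypothetical scores for 3 tokens
probs = next_token_probs(logits)
rng = np.random.default_rng(0)
token = rng.choice(3, p=probs)    # sampled, hence non-deterministic
```

Raising the temperature flattens the distribution and makes unlikely tokens more probable, one simple mechanism behind variable (and sometimes hallucinated) output.&lt;br /&gt;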
&lt;br /&gt;
==Monday, September 1 NO CLASS: LABOR DAY==&lt;br /&gt;
&lt;br /&gt;
==Monday, September 8 (4:00-5:50pm) Ontology and the History of AI==&lt;br /&gt;
&lt;br /&gt;
Part 1: From Good Old Fashioned (Logical, Symbolic) AI to ChatGPT&lt;br /&gt;
&lt;br /&gt;
*[https://buffalo.box.com/v/Ontology-history Lecture]&lt;br /&gt;
*[https://buffalo.box.com/v/Slides-Lecture-2 Slides]&lt;br /&gt;
*[https://www.youtube.com/watch?v=zFyGDBtbVdc Short Youtube video on how ChatGPT fakes quotations]&lt;br /&gt;
&lt;br /&gt;
Since its inception in the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created that are immediately available to the wider population, who can now have real hands-on experience of what AI can do. &lt;br /&gt;
&lt;br /&gt;
In this first lecture we will address the origins of AI at Stanford University in the 1970s and &#039;80s, and specifically in the work on common-sense ontology of Patrick Hayes and others.&lt;br /&gt;
&lt;br /&gt;
Topics to be dealt with include:&lt;br /&gt;
&lt;br /&gt;
:What is ontology?&lt;br /&gt;
:From Aristotle to 20th century philosophical ontology&lt;br /&gt;
:Patrick Hayes, Naive Physics and ontology-based robotics&lt;br /&gt;
:Doug Lenat and the CYC (for &#039;enCYClopedia&#039;) project&lt;br /&gt;
:Why CYC failed&lt;br /&gt;
:Why ontology is still important to AI&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
:[https://buffalo.box.com/v/History-of-AI History of AI]&lt;br /&gt;
:[https://utt.hal.science/hal-02954862v1/document Where do ontologies come from?]&lt;br /&gt;
:See also references to Hayes in [https://www.physicalism.com/osr.pdf &#039;&#039;Everything must go&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
==Monday, September 15 (4:00-5:50pm) Limits of AI? ==&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Ontology-and-AI-Slides3 Slides]&lt;br /&gt;
:[https://buffalo.box.com/v/Ontology-and-AI-Video3 Video]&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: Methods, mathematics, usage &lt;br /&gt;
&lt;br /&gt;
2. Natural and engineered systems&lt;br /&gt;
&lt;br /&gt;
3. The ontology of systems&lt;br /&gt;
&lt;br /&gt;
4. Complex systems &lt;br /&gt;
&lt;br /&gt;
5. The limits of Turing machines&lt;br /&gt;
&lt;br /&gt;
6. Why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Conclusions: &lt;br /&gt;
:AI is a family of algorithms to automate repetitive events&lt;br /&gt;
:Deep neural networks have nothing to do with neurons&lt;br /&gt;
:AI is not artificial &#039;intelligence&#039;; it is a branch of mathematics in which the attempt is made to push the Turing machine to its limits by using gigantically large amounts of data&lt;br /&gt;
&lt;br /&gt;
Background reading:&lt;br /&gt;
:[https://www.nytimes.com/2025/09/03/opinion/ai-gpt5-rethinking.html?unlocked_article_code=1.jE8.eAwg.I1yx07GQmDbh&amp;amp;smid=nytcore-ios-share&amp;amp;referringSource=articleShare&amp;amp;utm_source=substack&amp;amp;utm_medium=email Marcus on superintelligence]&lt;br /&gt;
:https://www.wheresyoured.at/&lt;br /&gt;
:https://x.com/jobstlandgrebe?lang=en&lt;br /&gt;
:https://ontology.buffalo.edu/smith/&lt;br /&gt;
&lt;br /&gt;
==Monday, September 22 (4:00-5:50pm) Machine Consciousness, Transhumanism, and Ecological Psychology==&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Ontology-and-AI-4-video Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Ontology-and-AI-4-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
1. Jobst Landgrebe on mathematical definitions of consciousness&lt;br /&gt;
&lt;br /&gt;
2. Surveys the spectrum of transhumanism&lt;br /&gt;
&lt;br /&gt;
3. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
4. Explains why Sam Altman and other AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
&lt;br /&gt;
5. J. J. Gibson, direct realism, and how our behavior is tuned to affordances&lt;br /&gt;
&lt;br /&gt;
Background: &lt;br /&gt;
:[https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/ TESCREALISM]&lt;br /&gt;
:[https://buffalo.box.com/v/Transhumanism-Mind-Body Transhumanism and the Mind-Body Problem]&lt;br /&gt;
&lt;br /&gt;
AI and the meaning of life:&lt;br /&gt;
:[http://ontology.buffalo.edu/smith/articles/Matrix.pdf AI and &#039;&#039;The Matrix&#039;&#039;]&lt;br /&gt;
:[https://buffalo.box.com/s/6knt5u23f8zloxydvzp5q3c1dzbmimkf There is no general AI]&lt;br /&gt;
:[https://buffalo.box.com/v/Transhumanism-Lugano-2025 Landgrebe on Transhumanism]&lt;br /&gt;
:[https://michaelnotebook.com/xriskbrief/index.html Considering the existential risk of Artificial Superintelligence]&lt;br /&gt;
:[https://buffalo.box.com/v/We-are-living-in-a-simulation Scott Adams: We are living in a simulation]&lt;br /&gt;
Ontology of the Eruv (why it would take all the fun out of real estate if everyone could live next door to John Lennon)&lt;br /&gt;
&lt;br /&gt;
Are we living in a simulation?&lt;br /&gt;
&lt;br /&gt;
:David Chalmers&#039; &#039;&#039;[https://www.amazon.com/Reality-Virtual-Worlds-Problems-Philosophy/dp/0393635805 Reality+]&#039;&#039; &lt;br /&gt;
:[https://buffalo.box.com/v/We-are-living-in-a-simulation Scott Adams: We are living in a simulation]&lt;br /&gt;
:[http://ontology.buffalo.edu/smith/articles/Matrix.pdf AI and &#039;&#039;The Matrix&#039;&#039;]&lt;br /&gt;
:[https://buffalo.box.com/s/6knt5u23f8zloxydvzp5q3c1dzbmimkf Slides]&lt;br /&gt;
:[https://buffalo.box.com/v/BS-Intelligence-Lugano-2025 Are we living in a simulation?]&lt;br /&gt;
:[https://buffalo.box.com/v/Living-in-a-Simulation On Chalmers on &#039;&#039;Reality+&#039;&#039;?]&lt;br /&gt;
:[https://buffalo.box.com/v/AI-in-the-Future The Future of Artificial Intelligence]&lt;br /&gt;
&lt;br /&gt;
Machine consciousness: Machines cannot have intentionality; they cannot have experiences which are &#039;&#039;about&#039;&#039; something.  &lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
:[https://buffalo.box.com/v/BS-Lugano-Machine-Will Slides]&lt;br /&gt;
:[https://buffalo.box.com/v/Machine-Consciousness-BS-2025 Video]&lt;br /&gt;
:[https://www.youtube.com/watch?v=tt-JzB50sJE Searle&#039;s Chinese Room Argument]&lt;br /&gt;
:[https://www.law.upenn.edu/live/files/3413-searle-j-minds-brains-and-programs-1980.pdf Searle: Minds, Brains, and Programs] &lt;br /&gt;
:[https://arxiv.org/pdf/1901.02918 Making AI Meaningful Again]&lt;br /&gt;
:[https://aclanthology.org/2025.acl-long.1258.pdf Søgaard: Do Language Models Have Semantics?]&lt;br /&gt;
:[https://arxiv.org/pdf/2511.16582 Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints]&lt;br /&gt;
&lt;br /&gt;
==Monday, September 29 (4:00-5:50pm) AGI, Behavior Settings and Distributed Cognition==&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Lecture-5-Niches Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Lecture-5-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Part 1. Question-and-answer session with Jérémy Ravenel of [https://home.naas.ai naas.ai]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Questions to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:What are you doing with BFO and LLMs? &lt;br /&gt;
:Can you rely on BFO still being operative in the proper way even after a new release of an LLM?&lt;br /&gt;
&lt;br /&gt;
See also: [https://www.linkedin.com/posts/jeremyravenel_why-is-bfo-so-powerful-bfo-basic-formal-activity-7250607560976732163-d7tZ/ Why is BFO so powerful?]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Part 2. Niches and Intelligence&#039;&#039;&#039;&lt;br /&gt;
:Knowing how vs Knowing that&lt;br /&gt;
:Personal knowledge and science&lt;br /&gt;
:Creativity&lt;br /&gt;
:Empathy&lt;br /&gt;
:Entrepreneurship&lt;br /&gt;
:Leadership and control (and ruling the world)&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
:[https://buffalo.box.com/v/AI-and-Creativity Explicit, implicit, practical, personal and tacit knowledge]&lt;br /&gt;
:[https://buffalo.box.com/v/Practical-knowledge Personal knowledge]&lt;br /&gt;
&lt;br /&gt;
==Monday, October 6 (4:00-5:50pm) Towards a theory of intelligence ==&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/ontology-and-AI-slides-6 Slides]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Lecture6-AI-Ontology Video]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Part 1. Definitions of intelligence&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
:B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
&lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Can a team be intelligent?&lt;br /&gt;
&lt;br /&gt;
See Ryan Muldoon, &amp;quot;Diversity and the Division of Cognitive Labor&amp;quot;, &#039;&#039;Philosophy Compass&#039;&#039; 8 (2):117-125 (2013) &lt;br /&gt;
&lt;br /&gt;
Can a team made of humans and AI systems be intelligent?&lt;br /&gt;
&lt;br /&gt;
See M. Stelmaszak et al., &amp;quot;Artificial Intelligence as an Organizing Capability Arising from Human-Algorithm Relations&amp;quot;, &#039;&#039;Journal of Management Studies&#039;&#039;, https://doi.org/10.1111/joms.70003&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Part 2. What do IQ tests measure?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/What-do-IQ-tests-2022 Slides on IQ tests]&lt;br /&gt;
&lt;br /&gt;
:[https://www.youtube.com/watch?v=BcyeAbcDDgg Human and animal intelligence]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Readings:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Linda S. Gottfredson. [https://www1.udel.edu/educ/gottfredson/reprints/1994WSJmainstream.pdf Mainstream Science on Intelligence]. In: &#039;&#039;Intelligence&#039;&#039; 24 (1997), pp. 13–23.&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: [https://arxiv.org/pdf/1906.05833.pdf There is no Artificial General Intelligence]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The context-dependence of human intelligence, and why AGI is impossible&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Part 3. Affordances, tacit knowledge, cognitive niches, and the background of Artificial Intelligence&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;: &lt;br /&gt;
:Harry Heft, &#039;&#039;[https://buffalo.box.com/shared/static/bbaq21q115pi8xpa5744ku1ftuuj6je0.pdf Ecological Psychology in Context]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:[https://youtu.be/lS4-QSR1sNk?t=791 There&#039;s no &#039;I&#039; in &#039;AI&#039;], Steven Pemberton, Amsterdam, December 12, 2024 &lt;br /&gt;
::1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
::2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
::3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
&lt;br /&gt;
==Monday October 13 NO CLASS: FALL BREAK==&lt;br /&gt;
&lt;br /&gt;
==Monday October 20 (4:00-5:50pm) The Free Will Problem and the Problem of the Machine Will==&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Lecture7-video Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Lecture7-slides Slides]&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers &#039;&#039;don&#039;t give a damn&#039;&#039;. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
:The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” &#039;&#039;Fortune&#039;&#039;, June 15, 2023. See also [https://www.nature.com/articles/s41562-023-01723-5 here].&lt;br /&gt;
&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
:The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
:The idea of digital immortality will never be realized [https://buffalo.box.com/v/Digital-Immortality-2023 Slides]&lt;br /&gt;
:There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
&lt;br /&gt;
What is the basis of ethics as applied to humans?&lt;br /&gt;
&lt;br /&gt;
:[https://philpapers.org/rec/TALFAI-2 Raymond Tallis: &#039;&#039;Freedom: An Impossible Reality&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Landgrebe-Ethics Slides]&lt;br /&gt;
:[https://youtu.be/EiBBS8ueyz4 Video]&lt;br /&gt;
&lt;br /&gt;
:Utilitarianism&lt;br /&gt;
:Value ethics&lt;br /&gt;
:Responsibility&lt;br /&gt;
&lt;br /&gt;
No responsibility without objectifying intelligence&lt;br /&gt;
&lt;br /&gt;
On what basis should we build an AI ethics? &lt;br /&gt;
&lt;br /&gt;
On why AI ethics is (a) impossible, (b) unnecessary &lt;br /&gt;
&lt;br /&gt;
Readings: &lt;br /&gt;
:Moor: [https://philosophynow.org/issues/72/Four_Kinds_of_Ethical_Robots Four kinds of ethical robots]&lt;br /&gt;
:Jobst Landgrebe and Barry Smith: No AI Ethics &lt;br /&gt;
:Crane: [https://iai.tv/articles/the-ai-ethics-hoax-auid-1762?_auid=2020 The AI Ethics Hoax]&lt;br /&gt;
&lt;br /&gt;
==Monday October 27 (4:00-5:50pm) The Ontology of Consciousness==&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/Lecture8-slides Slides]&lt;br /&gt;
[https://buffalo.box.com/v/Lecture8-video Video]&lt;br /&gt;
&lt;br /&gt;
:Learning outcomes&lt;br /&gt;
&lt;br /&gt;
:John Searle&lt;br /&gt;
::On consciousness: the Chinese Room Argument&lt;br /&gt;
::Searle and Smith&lt;br /&gt;
&lt;br /&gt;
:Neuroscience and consciousness&lt;br /&gt;
&lt;br /&gt;
:[https://www.ted.com/talks/anil_seth_being_you_a_new_science_of_consciousness Anil Seth, &#039;&#039;Being You: A New Science of Consciousness&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
:[https://link.springer.com/article/10.1007/s11229-019-02192-y Making AI meaningful again]&lt;br /&gt;
&lt;br /&gt;
:[https://philpapers.org/rec/TALWTM Raymond Tallis, &#039;&#039;Why the Mind is not a Computer&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
:[https://philpapers.org/rec/TALTEA Raymond Tallis: &#039;&#039;The Explicit Animal: A Defence of Human Consciousness&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
==Monday November 3 (4:00-5:50pm) Debates on ontology engineering: Part 1==&lt;br /&gt;
&lt;br /&gt;
Featuring [https://johnbeverley.com/ John Beverley]&lt;br /&gt;
&lt;br /&gt;
:[https://buffalo.box.com/v/Lecture9-video Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Lecture9-slides Slides]&lt;br /&gt;
:[https://buffalo.box.com/v/Transcription-Debate-1 Transcription]&lt;br /&gt;
&lt;br /&gt;
Debating the following motions: &lt;br /&gt;
:Philosophy is irrelevant to ontology engineering &lt;br /&gt;
::[https://buffalo.box.com/v/use-mention-confusion The use-mention confusion]&lt;br /&gt;
:Mappings merely give extra life to bad ontologies &lt;br /&gt;
:AI fear is justified&lt;br /&gt;
:BFO is too slow to react&lt;br /&gt;
:Knowledge graphs cannot prevent hallucinations&lt;br /&gt;
:There can never be AGI&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background&#039;&#039;&#039;&lt;br /&gt;
:Strategies for leveraging ontologies and knowledge graphs to enhance the capabilities of Large Language Models and address their limitations.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[https://www.linkedin.com/pulse/ontological-foundation-cornerstone-trustworthy-ai-shawn-riley-l3igc/ The Ontological Foundation: A Cornerstone for Trustworthy AI]&#039;&#039;, with caveats added in &#039;&#039;&#039;bold face&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Explainability: Ontologies make AI decision-making processes more transparent and interpretable. By providing a clear, logical structure of knowledge, they allow for tracing the reasoning behind &#039;&#039;&#039;some&#039;&#039;&#039; AI decisions.&lt;br /&gt;
&lt;br /&gt;
*Consistency: They &#039;&#039;&#039;help to foster&#039;&#039;&#039; logical consistency across AI systems, reducing errors and contradictions. This is particularly crucial in complex domains where maintaining coherence is challenging.&lt;br /&gt;
&lt;br /&gt;
*Interoperability: Ontologies &#039;&#039;&#039;help to foster&#039;&#039;&#039; seamless integration of knowledge from various sources and domains. This interoperability is essential for creating comprehensive AI systems that can reason across multiple areas of expertise.&lt;br /&gt;
&lt;br /&gt;
*Semantic Richness: Ontologies capture nuanced relationships and constraints that go beyond simple hierarchical structures, allowing for more sophisticated reasoning.&lt;br /&gt;
&lt;br /&gt;
*Domain Expertise Encoding: They provide a means to formally encode human expert knowledge, &#039;&#039;&#039;to some extent&#039;&#039;&#039; bridging the gap between human understanding and machine processing.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/AI-and-the-Future An introduction to the statistical foundations of AI]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://buffalo.box.com/v/Statistical-Foundations-of-AI Video]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The types of AI&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Deterministic AI&lt;br /&gt;
::Good old fashioned AI (GOFAI)&lt;br /&gt;
:Basic stochastic AI&lt;br /&gt;
::How regression works&lt;br /&gt;
:Advanced stochastic AI&lt;br /&gt;
::Neural networks and deep learning&lt;br /&gt;
:Hybrid&lt;br /&gt;
::Neurosymbolic AI&lt;br /&gt;
&lt;br /&gt;
:Background reading: &#039;&#039;Why machines will never rule the world&#039;&#039;, chapter 8 of the 1st edition, chapter 9 of the 2nd&lt;br /&gt;
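The "basic stochastic AI" entry in the list above can be made concrete with a minimal sketch (ours, not from the course materials): fitting a line by ordinary least squares, the simplest case of regression.

```python
# Minimal sketch of "basic stochastic AI": fitting a line y = a*x + b
# by ordinary least squares, using only plain Python.
# Illustrative example; not taken from the course materials.

def fit_line(xs, ys):
    """Return slope a and intercept b minimizing the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: a = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Noise-free data lying on y = 2x + 1: the fit recovers it exactly.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
```

The same closed-form estimator underlies the statistical machinery that advanced stochastic AI (neural networks, deep learning) scales up by iterative optimization rather than a closed form.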
&lt;br /&gt;
==Monday November 10 (4:00-5:50pm) Debates on ontology engineering: Part 2==&lt;br /&gt;
&lt;br /&gt;
:[https://youtu.be/UzHTEMxgKEc Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Lecture10Slides Slides]&lt;br /&gt;
:[https://buffalo.box.com/v/Debate2-Transcription Transcription]&lt;br /&gt;
&lt;br /&gt;
:Will combining the semantically rich architectures provided by ontologies and knowledge graphs with the generative strengths of LLMs provide a path towards more explainable artificial intelligence systems, more trustworthy output, and a deeper understanding of vulnerabilities arising from integrated architectures?&lt;br /&gt;
:The idea of digital immortality is idiotic&lt;br /&gt;
:We should allow AI research to proceed unregulated&lt;br /&gt;
:Even if you think AGI is impossible, you should treat robots at certain levels of sophistication as moral agents&lt;br /&gt;
:&#039;OWL semantics&#039; have nothing to do with the semantics of ordinary language&lt;br /&gt;
:AI will take away our jobs&lt;br /&gt;
:There will never be driverless cars&lt;br /&gt;
:Science is not ready for software, let alone AI&lt;br /&gt;
&lt;br /&gt;
Outlines the current landscape of ontology-based AI enhancement strategies, highlighting what goes well and what goes poorly, and why ontology engineering is necessary.&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
:[https://quantumzeitgeist.com/haghighi-stanford-demonstrates-ontological-bias-in-chatgpt-image-generation-via-root-depiction/ Ontological Assumptions in AI Outputs]&lt;br /&gt;
&lt;br /&gt;
==November 10 is the deadline for submission to BS of starting drafts for your essays==&lt;br /&gt;
&lt;br /&gt;
PhD candidates: &lt;br /&gt;
:2 credit hours: 2000 words / starting draft: 1000 words&lt;br /&gt;
:3 credit hours: 2000 + 3000 words / 1000 + 1000 words&lt;br /&gt;
&lt;br /&gt;
Masters candidates:&lt;br /&gt;
:2 credit hours: 1500 words /starting draft: 750 words&lt;br /&gt;
:3 credit hours: 1500 + 2000 words / 750 + 750 words&lt;br /&gt;
&lt;br /&gt;
Undergraduate candidates&lt;br /&gt;
:2 credit hours: 1000 words / starting draft: 500 words&lt;br /&gt;
:3 credit hours: 1500 words / 500 + 500 words&lt;br /&gt;
&lt;br /&gt;
==Wednesday, November 19 (10:00 - 11:50am) On Hallucinations and Political Correctness ==&lt;br /&gt;
&lt;br /&gt;
This will be a lecture by [https://x.com/JobstLandgrebe?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor Jobst Landgrebe] on:&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;Why machines will never stop hallucinating&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/Lecture11-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/Lecture11-Video Video]&lt;br /&gt;
&lt;br /&gt;
In current-day culture, concerns are raised when LLMs respond with symbol or pixel sequences that are seen as deviating from social norms of political correctness or wokeness -- in other words, when they say the unsayable. Further problems are raised for LLM technology by the inconvenient fact of hallucinations, which prevents their use for task automation. LLM architects and engineers try to prevent both types of event. This talk shows why it is impossible to ensure that LLMs neither hallucinate nor speak the unspeakable, drawing on arguments from the theory of computation (Turing&#039;s decision problem, Rice&#039;s theorem, Gödel&#039;s First Incompleteness Theorem).&lt;br /&gt;
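As a sketch of how an impossibility argument of this kind typically runs (our gloss, not the lecture's own formulation): a perfect censor would be a total decision procedure for a non-trivial semantic property of programs, and Rice's theorem rules such procedures out.

```latex
% Sketch (our gloss, not the lecture's formulation).
% Let U be a fixed non-empty set of "forbidden" outputs, and for a
% program p let L(p) be the set of outputs p can produce.
% A perfect censor would be a total computable predicate c with
% c(p) = 1 exactly when p can ever emit a forbidden output.
\[
  c(p) =
  \begin{cases}
    1 & \text{if } L(p) \cap U \neq \emptyset,\\
    0 & \text{otherwise.}
  \end{cases}
\]
% "Can ever emit a forbidden output" is a non-trivial semantic
% property of programs (some have it, some do not), so by Rice's
% theorem no total computable c of this form exists.
```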
&lt;br /&gt;
Literature: &lt;br /&gt;
&lt;br /&gt;
[https://arxiv.org/abs/2307.10719 Glukhov et al. 2023], LLM Censorship: A Machine Learning Challenge or a Computer Security Problem?&lt;br /&gt;
&lt;br /&gt;
[https://arxiv.org/abs/2409.05746 Banerjee et al. 2024], LLMs Will Always Hallucinate, and We Need to Live With This&lt;br /&gt;
&lt;br /&gt;
[https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf Apple, The Illusion of Thinking]&lt;br /&gt;
&lt;br /&gt;
==Monday November 24 (4:00-5:50pm) Landgrebe on the Replication Crisis. Jacko on the Ontological Foundations of Proxemics==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Jobst Landgrebe: Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/Lecture12-Landgrebe-Video Video]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/Lecture12-Landgrebe-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Jan Jacko: Ontological Foundations of Proxemics&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/Lectuer12-Jacko-Video Video]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/v/Lecture12-Jacko-Slides Slides]&lt;br /&gt;
&lt;br /&gt;
:Proxemics is the study of spatial behaviour in interpersonal communication. It rests on a set of implicit and explicit assumptions about the nature of space, embodiment, intentionality, and meaning. This presentation aims to articulate these assumptions and outline a conceptual framework for understanding proxemics as an ontologically grounded discipline.&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
Background on the replication crisis:&lt;br /&gt;
&lt;br /&gt;
:[https://plato.stanford.edu/entries/scientific-reproducibility Reproducibility of Scientific Results], &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 2018&lt;br /&gt;
:[https://www.vox.com/future-perfect/21504366/science-replication-crisis-peer-review-statistics Science has been in a “replication crisis” for a decade]&lt;br /&gt;
:[https://www.youtube.com/watch?v=HhDGkbw1Fdw The Irreproducibility Crisis and the Lehman Crash], Barry Smith, YouTube 2020&lt;br /&gt;
:[https://buffalo.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 Slides]&lt;br /&gt;
:[https://buffalo.app.box.com/s/m3nu15lqjw0qhpqycz3wjsai057p9jf6 The replication problems which arise when AI is applied in scientific research]&lt;br /&gt;
:[https://buffalo.box.com/v/Is-Psychology-Finished? Is Psychology Finished?]&lt;br /&gt;
:[https://x.com/cremieuxrecueil/status/1983994242272993592 Bayer tested some findings and only achieved a 21% replication rate for biomedical studies]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Monday December 1 (4:00-5:50pm) Landgrebe on machine intelligence. Jacko on psychopathic AI==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Jobst Landgrebe: Why we cannot create intelligence inside a machine&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Timothy W. Coleman: Beyond the Limits of AI: Ontology as a Framework for Good System Design (Student presentation)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Michael Behun III: The Paradox within Artificial Intelligence Development&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Jan Jacko: Are intelligent machines psychopathic by design?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:There are two major paradigms in clinical psychology. The first treats mental and personality disorders as disturbances of an inner life: of subjective experience, affect, and self-awareness. This view cannot be meaningfully applied to artificial systems, for which no such subjectivity is given. The second paradigm is behavioural and functional. Here disorders, especially personality disorders, are defined as stable, recurrent patterns of behaviour, cognition, and interpersonal functioning that deviate from expected norms and impair adaptation. Psychopathy in this framework is a cluster of observable traits: persistent violation of social rules, instrumental treatment of others, chronically shallow or incongruent emotional expression, irresponsibility, and a striking absence of anxiety or inhibition in situations that normally elicit it. In this talk I adopt the second, behavioural paradigm and extend it to artificial systems, introducing the notion of &#039;&#039;&#039;AI quasi-personality&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==Monday December 8 (4:00-5:50pm) Oral presentations (Compulsory for all students)==&lt;br /&gt;
&lt;br /&gt;
4:00  John Davis: Symbiotic Surveillance and Artificial Intelligence&lt;br /&gt;
 &lt;br /&gt;
4:15 Rachel Mavrovich&lt;br /&gt;
&lt;br /&gt;
4:30  Cristian Keroles: Scientific Realism, Paradigm Shifts, and the Feasibility of AGI&lt;br /&gt;
&lt;br /&gt;
4:45  Mike Behun Jr.: Examining the Role of Formal Ontology and Hybrid AI in Achieving Trustworthy Results, Based on Domain Experts for High Stakes Systems.&lt;br /&gt;
&lt;br /&gt;
5:00  Ore Afe:&lt;br /&gt;
&lt;br /&gt;
5:15  Gregory DeFranco: Will Algorithms Control Us?&lt;br /&gt;
&lt;br /&gt;
5:30  Claire Allen: Video Games and the Virtual World&lt;br /&gt;
&lt;br /&gt;
5:45  John Hogan: Artificial Unintelligence&lt;br /&gt;
&lt;br /&gt;
==Background Material==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to AI for Philosophers&#039;&#039;&#039;&lt;br /&gt;
:[https://www.youtube.com/watch?v=cmiY8_XVvzs Why not robot cops? Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Why-not-robot-cops Why not robot cops? Slides] &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;An Introduction to Philosophy for Computer Scientists&#039;&#039;&#039;&lt;br /&gt;
:[https://buffalo.box.com/v/What-is-philosophy Video]&lt;br /&gt;
:[https://buffalo.box.com/v/Crash-Course-Introduction Slides]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.cp.eng.chula.ac.th/~prabhas/teaching/cbs-it-seminar/2012/aiphil-mccarthy.pdf John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/ct-1256/ Companion volume to &#039;&#039;Why Machines Will Never Rule the World&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[https://ncorwiki.buffalo.edu/index.php/Interviews_and_podcasts_on_%27%27Why_Machines_Will_Never_Rule_the_World%27%27 Podcasts and interviews on &#039;&#039;Why Machines Will Never Rule the World&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
==Student Learning Outcomes==&lt;br /&gt;
&lt;br /&gt;
1. Comprehend the Architecture and Operation of Large Language Models: Explain the basic design and functioning of Large Language Models (LLMs) such as ChatGPT. Define and correctly use key terms.&lt;br /&gt;
&lt;br /&gt;
2. Evaluate the Theoretical and Practical Limits of AI: Explain the limitations of AI systems as applications of Turing-computable mathematics. Critically assess claims about Artificial General Intelligence (AGI) and the “singularity.”&lt;br /&gt;
&lt;br /&gt;
3. Examine Theories of Machine Consciousness, Transhumanism, and Simulation: Explain why machines lack intentionality and subjective experience.&lt;br /&gt;
&lt;br /&gt;
4. Understand Ethical and Normative Dimensions of AI: Explain why AI systems cannot possess will, intention, or moral responsibility, and differentiate between AI ethics and ethics of AI use.&lt;br /&gt;
&lt;br /&gt;
5. Apply Ontology-Based Strategies for AI Enhancement: Explain how ontologies and knowledge graphs can improve the explainability, consistency, and interoperability of AI systems. Identify strengths and weaknesses of ontology-based and neurosymbolic AI approaches.&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75521</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75521"/>
		<updated>2026-01-20T15:16:00Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FIRST DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Venue: Room A23&lt;br /&gt;
Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal &#039;&#039;Cosmos + Taxis&#039;&#039; in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
Grading&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1.	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2.	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4.	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
:6.	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7.	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8.	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: Methods, mathematics, usage as well&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskii, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI? ==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn-factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’” Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
An introduction to the statistical foundations of AI&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 2 (13:30-16:30) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75520</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75520"/>
		<updated>2026-01-20T14:48:14Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FIRST DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1.	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2.	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4.	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
:6.	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7.	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8.	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday, May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
Roman V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2.	Tuesday, May 5 (09:30-12:15) Limits and Dangers of AI?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The absence of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality / Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
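The &#039;How regression works&#039; topic above can be illustrated with a minimal sketch of ordinary least squares fitted in closed form; the data points below are invented for the example.&lt;br /&gt;

```python
# Minimal illustration of how regression works: ordinary least squares,
# fitted in closed form. The data points are made up for the example
# (roughly y = 2x + 1 plus small deviations).

def fit_line(xs, ys):
    """Return the slope and intercept minimizing the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # roughly y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))
```

The fitted line is the one stochastic-AI workhorse simple enough to write out by hand; neural networks generalize the same fit-parameters-to-data idea to vastly more parameters.&lt;br /&gt;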
&lt;br /&gt;
==5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==8.	Friday, May 15 (09:30-12:15) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75519</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75519"/>
		<updated>2026-01-20T14:47:43Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Thursday, May 7 20 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FIRST DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1.	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2.	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4.	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
:6.	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7.	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8.	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday, May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
Roman V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2.	Tuesday, May 5 (09:30-12:15) Limits and Dangers of AI?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The absence of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather&#039; says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality / Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==Thursday, May 7 (09:30 - 12:15) Can a machine be conscious? An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why machines will never rule the world, chapter 7 (chapter 8 of 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). This problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==8.	Friday, May 15 (09:30-12:15) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75518</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75518"/>
		<updated>2026-01-20T14:47:22Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* 2	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI? Transhumanism and digital immortality */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FIRST DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialising in the design and implementation of holistic AI solutions. He has 20 years of experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1.	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2.	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4.	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
:6.	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7.	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8.	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2	Tuesday May 5 (09:30-12:15) Transhumanism: The Ultimate Stage of Cartesianism==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3.	Wednesday, May 6 (09:30 - 12:15) Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4.	Thursday, May 7 (09:30 - 12:15) An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==5.	Friday, May 8 (13:30 - 16:15) Explicit, implicit, practical, personal and tacit knowledge==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75517</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75517"/>
		<updated>2026-01-20T14:46:49Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FIRST DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1.	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2.	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3.	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4.	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5.	Friday, May 8 (13:30 - 16:15) Are we living in a simulation&lt;br /&gt;
:6.	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7.	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8.	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2	Tuesday May 5 (09:30-12:15) Transhumanism and digital immortality==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
&lt;br /&gt;
==3.	Wednesday, May 6 (09:30 - 12:15) Are we living in a simulation?==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==4.	Thursday, May 7 (09:30 - 12:15) An introduction to the statistical foundations of AI==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==5.	Friday, May 8 (13:30 - 16:15) Explicit, implicit, practical, personal and tacit knowledge==&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==Friday May 15 (09:30-12:15) Are We Living in a Simulation?==&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75516</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75516"/>
		<updated>2026-01-20T14:42:20Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FIRST DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). Recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era of popularization of AI. For the first time, AI tools have been created which are immediately available to the wider population, who can now have real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne specialised in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not an elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which means applying philosophical ideas derived from analytical metaphysics to the concrete practical problems which arise where attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5	Friday, May 8 (13:30 - 16:15) Are we living in a simulation&lt;br /&gt;
:6	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction==&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039; as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jumping up and down and saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI cannot model complex systems adequately and synoptically, and why it therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==2	Tuesday May 5 (09:30-12:15) Transhumanism and digital immortality==&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of this giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity: an AI will have no desire for self-preservation. See “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized (Slides)&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75515</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75515"/>
		<updated>2026-01-20T14:41:49Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FIRST DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne that specialises in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which applies philosophical ideas derived from analytical metaphysics to the concrete practical problems that arise when attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:Venue: Room A23&lt;br /&gt;
&lt;br /&gt;
:1	Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
:2	Tuesday May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3	Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
:4	Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5	Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
:6	Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
:7	Wednesday May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8	Friday May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Program&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==1. Monday, May 4 (13:30-16:15) Introduction&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
==2. Tuesday, May 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
==3. Wednesday, May 6 (09:30 - 12:15) Transhumanism and digital immortality&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
==4. Thursday, May 7 (09:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
==5. Friday, May 8 (13:30 - 16:15) Are we living in a simulation?&lt;br /&gt;
Are we living in a simulation?&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
==6. Tuesday, May 12 (09:30 - 12:15) An introduction to the statistical foundations of AI&lt;br /&gt;
An introduction to the statistical foundations of AI&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why Machines Will Never Rule the World, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
==7. Wednesday, May 13 (09:30 - 12:15) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
==8. Friday, May 15 (09:30-12:15) Are We Living in a Simulation?&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75514</id>
		<title>Philosophy and Artificial Intelligence 2026</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Philosophy_and_Artificial_Intelligence_2026&amp;diff=75514"/>
		<updated>2026-01-20T13:40:54Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&#039;&#039;&#039;Philosophy and Artificial Intelligence 2026&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe and Barry Smith&lt;br /&gt;
&lt;br /&gt;
MAP, USI, Lugano, Spring 2026&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;FIRST DRAFT VERSION&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Introduction&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Artificial Intelligence (AI) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent. On the strong version, the ultimate goal of AI is to create what is called Artificial General Intelligence (AGI), by which is meant an artificial system that is as intelligent as a human being.&lt;br /&gt;
&lt;br /&gt;
Since its inception in the middle of the last century, AI has enjoyed repeated cycles of enthusiasm and disappointment (AI summers and winters). The recent successes of ChatGPT and other Large Language Models (LLMs) have opened a new era in the popularization of AI. For the first time, AI tools are immediately available to the wider population, who can now gain real hands-on experience of what AI can do.&lt;br /&gt;
&lt;br /&gt;
These developments in AI open up a series of questions such as:&lt;br /&gt;
&lt;br /&gt;
:Will the powers of AI continue to grow in the future, and if so will they ever reach the point where they can be said to have intelligence equivalent to or greater than that of a human being?&lt;br /&gt;
:Could we ever reach the point where we can accept the thesis that an AI system could have something like consciousness or sentience?&lt;br /&gt;
:Could we reach the point where an AI system could be said to behave ethically, or to have responsibility for its actions?&lt;br /&gt;
:Can quantum computers enable a stronger AI than what we have today?&lt;br /&gt;
:Can a computer have desires, a will, and emotions?&lt;br /&gt;
:Can a computer have responsibility for its behavior?&lt;br /&gt;
:Could a machine have something like a personal identity? Would I really survive if the contents of my brain were uploaded to the cloud?&lt;br /&gt;
&lt;br /&gt;
We will describe in detail how stochastic AI works, and consider these and a series of other questions at the borderlines of philosophy and AI. The class will close with presentations of papers on relevant topics given by students.&lt;br /&gt;
&lt;br /&gt;
Some of the material for this class is derived from our book&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Why Machines Will Never Rule the World: Artificial Intelligence without Fear&#039;&#039; (2nd edition, Routledge 2025).&lt;br /&gt;
and from the companion volume:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Symposium on Why Machines Will Never Rule the World&#039;&#039; — Guest editor, Janna Hastings, University of Zurich&lt;br /&gt;
which appeared as a special issue of the open-access journal Cosmos + Taxis in early 2024.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faculty&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Jobst Landgrebe is the founder and CEO of Cognotekt GmbH, an AI company based in Cologne that specialises in the design and implementation of holistic AI solutions. He has 20 years&#039; experience in the AI field, including 8 years as a management consultant and software architect. He has also worked as a physician and mathematician, and he views AI itself -- to the extent that it is not elaborate hype -- as a branch of applied mathematics. Currently his primary focus is the biomathematics of cancer.&lt;br /&gt;
&lt;br /&gt;
Barry Smith is one of the world&#039;s most widely cited philosophers. He has contributed primarily to the field of applied ontology, which applies philosophical ideas derived from analytical metaphysics to the concrete practical problems that arise when attempts are made to compare or combine heterogeneous bodies of data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Essay with presentation: 70%&lt;br /&gt;
Essay with no presentation: 85%&lt;br /&gt;
Presentation: 15%&lt;br /&gt;
Class Participation: 15%&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Draft Schedule&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:1	Monday, February 4 (14:30-17:15) Introduction&lt;br /&gt;
:2	Tuesday February 5 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
:3	Wednesday, February 6 (13:30 - 16:15) Transhumanism and digital immortality&lt;br /&gt;
:4	Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
:5	Monday, April 28 (14:30 - 17:30) Are we living in a simulation?&lt;br /&gt;
:6	Tuesday, April 29 (13:30 - 16:30) An introduction to the statistical foundations of AI&lt;br /&gt;
:7	Wednesday April 30 (13:30 - 16:30) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
:8	Friday May 2 (13:30-16:30) Are We Living in a Simulation?&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background Material&#039;&#039;&#039;&lt;br /&gt;
Monday, February 17 (14:30-17:15) Introduction&lt;br /&gt;
Part 1: An introduction to the course: Impact of philosophy on AI; impact of AI on philosophy&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
Video&lt;br /&gt;
Why Machines Will Never Rule the World&lt;br /&gt;
Part 2: What are the essential marks of human intelligence?&lt;br /&gt;
&lt;br /&gt;
The classical psychological definitions of intelligence are:  &lt;br /&gt;
&lt;br /&gt;
A. the ability to adapt to new situations (applies both to humans and to animals) &lt;br /&gt;
B. a very general mental capability (possessed only by humans) that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience &lt;br /&gt;
Can a machine be intelligent in either of these senses?&lt;br /&gt;
&lt;br /&gt;
Slides on IQ tests&lt;br /&gt;
Readings:&lt;br /&gt;
&lt;br /&gt;
Linda S. Gottfredson. Mainstream Science on Intelligence. In: Intelligence 24 (1997), pp. 13–23.&lt;br /&gt;
Jobst Landgrebe and Barry Smith: There is no Artificial General Intelligence&lt;br /&gt;
Jobst Landgrebe: Deep reasoning, abstraction and planning&lt;br /&gt;
Background: Ersatz Definitions, Anthropomorphisms, and Pareidolia&lt;br /&gt;
&lt;br /&gt;
There&#039;s no &#039;I&#039; in &#039;AI&#039;, Steven Pemberton, Amsterdam, December 12, 2024&lt;br /&gt;
1. Ersatz definitions: using words like &#039;thinks&#039;, as in &#039;the machine is thinking&#039;, but with meanings quite different from those we use when talking about human beings. As when we define &#039;flying&#039; as moving through the air, and then jump up and down saying &amp;quot;look, I&#039;m flying!&amp;quot;&lt;br /&gt;
2. Pareidolia: a psychological phenomenon that causes people to see patterns, objects, or meaning in ambiguous or unrelated stimuli&lt;br /&gt;
3. If you can&#039;t spot irony, you&#039;re not intelligent&lt;br /&gt;
Tuesday February 18 (09:30-12:15) Limits and Dangers of AI?&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the technical fundamentals of AI: methods, mathematics, and usage&lt;br /&gt;
&lt;br /&gt;
2. Outlines the theory of complex systems documented in our book&lt;br /&gt;
&lt;br /&gt;
3. Shows why AI systems cannot model complex systems adequately and synoptically, and why they therefore cannot reach a level of intelligence equal to that of human beings.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Will AI Destroy Humanity? A Soho Forum Debate (Spoiler: Jobst won)&lt;br /&gt;
&lt;br /&gt;
R.V. Yampolskiy, AI: Unexplainable, Unpredictable, Uncontrollable&lt;br /&gt;
&lt;br /&gt;
Arvind Narayanan and Sayash Kapoor, AI Snake Oil&lt;br /&gt;
&lt;br /&gt;
Arnold Schelsky, The Hype Book, especially Chapter 1.&lt;br /&gt;
&lt;br /&gt;
Wednesday, February 19 (13:30 - 16:15) Transhumanism and digital immortality&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
1. Surveys the full spectrum of transhumanism and its cultural origins.&lt;br /&gt;
&lt;br /&gt;
2. Debunks the feasibility of radically improving human beings via technology.&lt;br /&gt;
&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
TESCREALISM, or: why AI gods are so passionate about creating Artificial General Intelligence&lt;br /&gt;
Considering the existential risk of Artificial Superintelligence&lt;br /&gt;
Thursday, February 20 (9:30 - 12:15) Can a machine be conscious?&lt;br /&gt;
The machine will&lt;br /&gt;
&lt;br /&gt;
Computers cannot have a will, because computers don&#039;t give a damn. Therefore there can be no machine ethics.&lt;br /&gt;
&lt;br /&gt;
The lack of the giving-a-damn factor is taken by Yann LeCun as a reason to reject the idea that AI might pose an existential risk to humanity – an AI will have no desire for self-preservation: “Almost half of CEOs fear A.I. could destroy humanity five to 10 years from now — but ‘A.I. godfather’ says an existential threat is ‘preposterously ridiculous’”, Fortune, June 15, 2023. See also here.&lt;br /&gt;
Implications of the absence of a machine will:&lt;br /&gt;
&lt;br /&gt;
The problem of the singularity (when machines will take over from humans) will not arise&lt;br /&gt;
The idea of digital immortality will never be realized Slides&lt;br /&gt;
The idea that human beings are simulations can be rejected&lt;br /&gt;
There can be no AI ethics (only: ethics governing human beings when they use AI)&lt;br /&gt;
Fermi&#039;s paradox is solved&lt;br /&gt;
Background:&lt;br /&gt;
&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
Searle&#039;s Chinese Room Argument&lt;br /&gt;
Machines cannot have intentionality; they cannot have experiences which are about something.&lt;br /&gt;
&lt;br /&gt;
Searle: Minds, Brains, and Programs&lt;br /&gt;
Monday, April 28 (14:30 - 17:30) Are we living in a simulation?&lt;br /&gt;
Are we living in a simulation?&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The Fermi Paradox&lt;br /&gt;
&lt;br /&gt;
Bostrom&#039;s Simulation Argument&lt;br /&gt;
&lt;br /&gt;
Background&lt;br /&gt;
&lt;br /&gt;
David Chalmers, Reality+&lt;br /&gt;
&lt;br /&gt;
Dialog with Chalmers avatar&lt;br /&gt;
&lt;br /&gt;
Tuesday, April 29 (13:30 - 16:30) An introduction to the statistical foundations of AI&lt;br /&gt;
An introduction to the statistical foundations of AI&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
&lt;br /&gt;
The types of AI&lt;br /&gt;
&lt;br /&gt;
Deterministic AI&lt;br /&gt;
Good old fashioned AI (GOFAI)&lt;br /&gt;
Basic stochastic AI&lt;br /&gt;
How regression works&lt;br /&gt;
Advanced stochastic AI&lt;br /&gt;
Neural networks and deep learning&lt;br /&gt;
Hybrid&lt;br /&gt;
Neurosymbolic AI&lt;br /&gt;
Background&lt;br /&gt;
Why Machines Will Never Rule the World, Chapter 7 (Chapter 8 of the 2nd edition)&lt;br /&gt;
&lt;br /&gt;
Wednesday April 30 (13:30 - 16:30) Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Personal knowledge&lt;br /&gt;
&lt;br /&gt;
Explicit, implicit, practical, personal and tacit knowledge&lt;br /&gt;
Video&lt;br /&gt;
Knowing how vs Knowing that&lt;br /&gt;
Personal knowledge and science&lt;br /&gt;
Creativity&lt;br /&gt;
Empathy&lt;br /&gt;
Entrepreneurship&lt;br /&gt;
Leadership and control (and ruling the world)&lt;br /&gt;
Complex Systems and Cognitive Science: Why the Replication Problem is here to stay&lt;br /&gt;
&lt;br /&gt;
The &#039;replication problem&#039; is the inability of scientific communities to independently confirm the results of scientific work. Much has been written on this problem, especially as it arises in (social) psychology, and on potential solutions under the heading of &#039;open science&#039;. But we will see that the replication problem has plagued medicine as a positive science since its beginnings (Virchow and Pasteur). The problem has become worse over the last 30 years and has massive consequences for healthcare practice and policy.&lt;br /&gt;
Slides&lt;br /&gt;
&lt;br /&gt;
Friday May 2 (13:30-16:30) Are We Living in a Simulation?&lt;br /&gt;
Are we living in a simulation?, Slides&lt;br /&gt;
Video&lt;br /&gt;
The Future of Artificial Intelligence, Slides&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Background Material&lt;br /&gt;
An Introduction to AI for Philosophers&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(AI experts are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
An Introduction to Philosophy for Computer Scientists&lt;br /&gt;
&lt;br /&gt;
Video&lt;br /&gt;
Slides&lt;br /&gt;
(Philosophers are invited to criticize what I have to say in this talk)&lt;br /&gt;
&lt;br /&gt;
John McCarthy, &amp;quot;What has AI in common with philosophy?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Why_Machines&amp;diff=75513</id>
		<title>Why Machines</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Why_Machines&amp;diff=75513"/>
		<updated>2026-01-20T13:04:17Z</updated>

		<summary type="html">&lt;p&gt;Phismith: /* Podcasts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==&#039;&#039;&#039;Video Playlists&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=PnG_pmMDXNA&amp;amp;list=PLyngZgIl3WThR6bptbvVAC0Cxn_mUCgCq&amp;amp;pp=gAQBiAQB ChatGPT]&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/playlist?list=PL-PSlrVaK5Iy5-VeJeHvNzc3it_0cIkT4 Why Machines Will Never Rule the World]&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=T9Fm7JtLtAs&amp;amp;list=PLyngZgIl3WTjN_4v42_1sLUz-GsiNwWST Artificial Intelligence]&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=bJ8tcposTek&amp;amp;list=PL-PSlrVaK5Iwe5CK06KCp1ZiCHNJcmJtN&amp;amp;pp=iAQB Videos auf Deutsch]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;Podcasts&#039;&#039;&#039;== &lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=Zle7pJIIfFc APA Book Spotlight on Why Machines Will Never Rule the World], Interview with Charlie Taben, American Philosophical Association Blog, September 23, 2022&lt;br /&gt;
&lt;br /&gt;
[https://www.digitaltrends.com/computing/why-ai-will-never-rule-the-world/ Why AI Will Never Rule the World], Interview with Luke Dormehl, Digital Trends, September 25, 2022&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=f7I6mtFkrOM Machines Will Never Rule Us!], LocoFoco, September 27, 2022&lt;br /&gt;
&lt;br /&gt;
[https://wirkman.com/2022/09/27/ai-is-here/ “AI is here, but will it rule us?”, Wirkman Comments], podcast with David Ramsey Steele (September 27, 2022)&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=IMnWAuoucjo Interview with Walid Saba on Why Machines Will Never Rule the World], Machine Learning Street Talk, December 15, 2022 (review starts half way through)&lt;br /&gt;
&lt;br /&gt;
[https://www.nas.org/blogs/media/video-will-machines-rule-the-world Will Machines Rule the World? Interview with J. Scott Turner], National Association of Scholars Restoring the Sciences Webinar Series, October 4, 2022 Youtube&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=ZB03aUACCk8 Why AI will never rule the world] Luke Dormehl, Digital Trends (September 2022), [https://philpapers.org/archive/DORWAW-2.pdf transcript]&lt;br /&gt;
&lt;br /&gt;
[https://www.cspicenter.com/p/why-the-singularity-might-never-come Why Machines Will Never Rule the World – On AI and Faith], Conversation between Jobst Landgrebe, Barry Smith and Rev. Jamie Franklin, Irreverend, November 30, 2022&lt;br /&gt;
&lt;br /&gt;
[https://x.com/Achgut_com/status/1781995642652418111 Interview with Jobst Landgrebe], ChatGPT, Kontrafunk Matussek, January 13, 2023 &lt;br /&gt;
&lt;br /&gt;
[https://www.cspicenter.com/p/why-the-singularity-might-never-come#details Why the Singularity Might Never Come]. Interview with Richard Hanania, Center for the Study of Partisanship and Ideology, January 30, 2023. Twitter link&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=Y-yovYmd1_c Where there’s no will there’s no way], Interview with Alex Thomson, UKCommons, April 2023&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=vO_JDTsrdiA Why AI Won’t Rule the World], Conversation with Jobst Landgrebe and Barry Smith, The Pangburn Hangout, May 5, 2023&lt;br /&gt;
&lt;br /&gt;
[https://studio.youtube.com/channel/UCxZZ9OmeEv83HFRHPvf3wzw/content/playlists Interview with Jobst Landgrebe], Reality Check Radio, May 23, 2023&lt;br /&gt;
&lt;br /&gt;
[https://youtu.be/0soE84dS_M8 K.O. durch K.I.?], Gerd Buurmann Interview with Jobst Landgrebe, May 2024.&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=3Ni3NiA29Pw AI and ChatGPT: Should we be worried?], Steven Peterson, Jobst Landgrebe and Barry Smith, National Association of Scholars, May 19, 2023&lt;br /&gt;
&lt;br /&gt;
[https://dataskeptic.com/blog/episodes/2023/why-machines-will-never-rule-the-world Why Machines Will Never Rule the World], Interview with Kyle Polich, Data Skeptic, May 29, 2023&lt;br /&gt;
&lt;br /&gt;
[https://provoke.fm/episode-241-the-bankers-bookshelf-why-machines-will-never-rule-the-world-artificial-intelligence-without-fear/ Why Machines Will Never Rule the World], Bankers Bookshelf, August 12, 2024&lt;br /&gt;
&lt;br /&gt;
[https://www.youtube.com/watch?v=xrlT1LQSyNU Skynet Will Not Become Self-Aware, AGI Is Impossible!], Geopolitics &amp;amp; Empire, April 17, 2024&lt;br /&gt;
&lt;br /&gt;
[https://philpapers.org/rec/SOLLAN L’intelligenza artificiale non dominerà il mondo], interview with Pierangelo Soldavini, Il Sole 24 Ore, April 27, 2024. With English translation&lt;br /&gt;
&lt;br /&gt;
[https://podcasts.apple.com/de/podcast/76-why-ai-will-never-become-conscious-with-jobst-landgrebe/id1679217838?i=1000666922982 Why AI Will Never Become Conscious], Parallel Mike Podcast, August 28, 2024&lt;br /&gt;
&lt;br /&gt;
[https://podcasts.apple.com/de/podcast/geopolitics-empire/id1003465597?i=1000672788791 The Trend Toward Repressive Rule, To What Extent Can It Work?], Geopolitics and Empire, October 12, 2024&lt;br /&gt;
&lt;br /&gt;
[https://podcasts.apple.com/us/podcast/jobst-landgrebe-can-ai-take-over-the-world-ep-100/id1675930017?i=1000684413612 Can AI Take Over The World?], The Two Stewards Show, 100th Episode, January 17, 2025&lt;br /&gt;
&lt;br /&gt;
[https://podcasts.apple.com/de/podcast/jerm-warfare/id1475255493?i=1000695058661 Why artificial intelligence will never take over], Jerm Warfare, Jeremy AI Interview, February 22, 2025&lt;br /&gt;
&lt;br /&gt;
[https://youtu.be/Aig-A64I-vM The intersection of ontology and technology], Haman Nature, January 19, 2026&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;Reviews&#039;&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Bert Gambini, “[https://www.futurity.org/artificial-intelligence-ai-2789642-2/ Book: AI is cool, but nowhere near human capacity]”, Futurity, August 12, 2022&lt;br /&gt;
&lt;br /&gt;
Ken Archer, “[https://twitter.com/kenarchersf/status/1570084869233082368 Why Machines Will Never Rule the World]”, September 14, 2022&lt;br /&gt;
&lt;br /&gt;
“[https://mindmatters.ai/2022/10/computer-takeover-wont-happen-say-a-scientist-and-philosopher Computer takeover won’t happen, say a scientist and philosopher]”, Mind Matters, October 4, 2022&lt;br /&gt;
&lt;br /&gt;
Walid S. Saba, “[https://philpapers.org/archive/SABMWN.pdf Why Machines Will Never Rule the World]”, Journal of Knowledge Structures &amp;amp; Systems, 3 (4), Oct./Dec. 2022, 38-41&lt;br /&gt;
&lt;br /&gt;
Aditya Aswani, “[https://www.lesswrong.com/posts/Kg6MeakJ8pBWpoHHm/why-i-should-work-on-ai-safety-part-2-will-ai-actually Will AI Actually Surpass Human Intelligence?]”, Less Wrong, December 29, 2023&lt;br /&gt;
&lt;br /&gt;
Fabian Nicolay, “[https://www.achgut.com/artikel/der_mythos_von_der_kuenstlichen_intelligenz Der Mythos von der Künstlichen Intelligenz]”, Achgut, January 14, 2023&lt;br /&gt;
&lt;br /&gt;
Jeff Hawley, “[https://philosophynews.com/whip-the-philosophers-the-robots-are-coming/ The robots are coming: What’s happening in philosophy]”, Philosophynews.com, August 22, 2022&lt;br /&gt;
&lt;br /&gt;
Ken Archer, [https://philosophynews.com/whip-the-philosophers-the-robots-are-coming/ Philosophy News Post], September 24, 2022&lt;br /&gt;
&lt;br /&gt;
[https://kontrafunk.radio/de/sendung-nachhoeren/talkshow/irrlichter-und-fixsterne/irrlichter-und-fixsterne-matussek-no-15 Irrlichter und Fixsterne], Matthias Matussek in conversation with Jobst Landgrebe, Matussek Kontrafunk No. 15, January 2023&lt;br /&gt;
&lt;br /&gt;
Matt Duckham, [https://ires.substack.com/p/why-machines-will-never-rule-the Why machines will never rule the world], The Hat Cupboard, March 19, 2023&lt;br /&gt;
&lt;br /&gt;
William J. Rapaport, “[https://cse.buffalo.edu/~rapaport/Papers/lands.pdf Is Artificial General Intelligence Impossible?]”, May 2023&lt;br /&gt;
&lt;br /&gt;
[https://mindmatters.ai/2023/05/new-routledge-book-on-ai-it-wont-take-us-over/ AI won’t take us over], Mind Matters, May 2023&lt;br /&gt;
&lt;br /&gt;
Atle Ottesen Søvik, “[https://www.idunn.no/doi/full/10.18261/nft.58.2-3.7#con Kan maskiner få generell intelligens? En kritisk drøfting av Landgrebe og Smiths bok Why Machines Will Never Rule the World]”, Norsk filosofisk tidsskrift, 58 (2-3), 12 September 2023, 141–152&lt;br /&gt;
&lt;br /&gt;
Thomas Jefferson Snodgrass, “[https://astralcodexten.substack.com/p/your-book-review-why-machines-will Why Machines Will Never Rule the World]”, Astral Codex Ten, November 14, 2023, finalist in the [https://www.astralcodexten.com/p/your-book-review-why-machines-will Astral Codex Ten Book Review Contest]&lt;br /&gt;
&lt;br /&gt;
[https://www.reddit.com/r/slatestarcodex/comments/13yv2rn/your_book_review_why_machines_will_never_rule_the/ Reddit comments on the Astral Codex Ten book review], August 2023&lt;br /&gt;
&lt;br /&gt;
Daniel Kelly, [https://www.buffalo.edu/catt/blog/catt-blog-101823.html The Supposed Looming Specter of Artificial General Intelligence], CATT Blog, October 18, 2023&lt;br /&gt;
&lt;br /&gt;
Peter Gärdenfors, [https://philpapers.org/rec/GRDWMW Why Machines Won&#039;t Take Over the World], March 2024, originally published in Swedish as [https://fritanke.se/sans/2024-nr-2/varfor-ai-inte-kommer-att-ta-over-varlden/ Varför AI inte kommer att ta över världen], Sans, 2024, 2&lt;br /&gt;
&lt;br /&gt;
Breaking Latest News, [https://www.breakinglatest.news/technology/artificial-intelligence-will-not-dominate-the-world/ Artificial Intelligence Will not Dominate the World], April 27, 2024&lt;br /&gt;
&lt;br /&gt;
Damian Szczęch, [https://czasopisma.ignatianum.edu.pl/rfi/article/view/3399/2749 Why Machines Will Never Rule The World], &#039;&#039;Rocznik Filozoficzny Ignatianum&#039;&#039; 30 (2):225-234, 2024&lt;br /&gt;
&lt;br /&gt;
Alberto Magnani, “[https://lawliberty.org/book-review/the-limitations-of-ai/ The Limitations of AI]”, Law and Liberty, May 20, 2024&lt;br /&gt;
&lt;br /&gt;
Atle Ottesen Søvik, “Vil kunstig intelligens i nær fremtid føre til dommedag for menneskeheten?”, Kirke og Kultur, 129 (1), 26-37, 25 March 2024&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;&#039;Book Symposium&#039;&#039;&#039;  ==&lt;br /&gt;
&lt;br /&gt;
Symposium on &#039;&#039;Why Machines Will Never Rule the World&#039;&#039;, edited by Janna Hastings, [https://cosmosandtaxis.org/ct-1256/ Cosmos+Taxis, 12 (5+6)] &lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/hastings_ct_vol12_iss5_6.pdf In Our Own Image: What the Quest for Artificial General Intelligence Can Teach Us About Being Human], Janna Hastings&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/rapaport_ct_vol12_iss5_6.pdf Is Artificial General Intelligence Impossible?], William J. Rapaport&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/simon_ct_vol12_iss5_6.pdf Is Intelligence Non-Computational Dynamical Coupling?], Jonathan A. Simon&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/schulz_hastings_ct_vol12_iss5_6.pdf What is a machine? Exploring the meaning of ‘artificial’ in ‘artificial intelligence’], Stefan Schulz and Janna Hastings&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/martinelli_ct_vol12_iss5_6.pdf Complexity and Particularity: An Argument for the Impossibility of Artificial Intelligence], Emanuele Martinelli&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/fjelland_ct_vol12_iss5_6.pdf Computers will not acquire general intelligence, but may still rule the world], Ragnar Fjelland&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/west_ct_vol12_iss5_6.pdf Semi-Autonomous Godlike Artificial Intelligence (SAGAI) is conceivable but how far will it resemble Kali or Thor?], Robert West&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/krinkin_ct_vol12_iss5_6.pdf Back to Evolutionary Intelligence: Reading Landgrebe and Smith], Kirill Krinkin&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/sedlakova_ct_vol12_iss5_6.pdf Conversational AI for Psychotherapy and Its Role in the Space of Reason], Jana Sedlakova&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/hedblom_ct_vol12_iss5_6.pdf  Every dog has its day: An in-depth analysis of the creative ability of visual generative AI], Maria M. Hedblom&lt;br /&gt;
&lt;br /&gt;
[https://cosmosandtaxis.org/wp-content/uploads/2024/05/landgrebe_smith_ct_vol12_iss5_6.pdf  Intelligence. And what computers still can’t do], Jobst Landgrebe and Barry Smith&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=Introduction_to_Philosophy_from_an_Ontological_Perspective&amp;diff=75512</id>
		<title>Introduction to Philosophy from an Ontological Perspective</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=Introduction_to_Philosophy_from_an_Ontological_Perspective&amp;diff=75512"/>
		<updated>2026-01-10T13:16:57Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 601 Introduction to Philosophy from an Ontological Perspective - Spring 2026&lt;br /&gt;
&lt;br /&gt;
Registration number 23606&lt;br /&gt;
&lt;br /&gt;
Instructor: [http://ontology.buffalo.edu/smith Barry Smith] &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Office hours&#039;&#039;&#039;: By appointment via email to [mailto:phismith@buffalo.edu phismith@buffalo.edu]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Course&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This course is a 1-credit-hour asynchronous online course for master&#039;s-level students and advanced undergraduates. No background in philosophy or ontology is presupposed.&lt;br /&gt;
&lt;br /&gt;
It provides an introduction to central themes in the history of philosophy viewed from an ontological perspective. The course is designed to be of interest to both philosophers and those with a background in computer and information science. Topics treated will include: &lt;br /&gt;
&lt;br /&gt;
- a brief history of ontology from Aristotle to the Human Genome Project&lt;br /&gt;
&lt;br /&gt;
- the meaning of life&lt;br /&gt;
&lt;br /&gt;
- the ontology of social reality&lt;br /&gt;
&lt;br /&gt;
- ontology leaving the mother ship of philosophy&lt;br /&gt;
&lt;br /&gt;
- why computer science needs philosophy&lt;br /&gt;
&lt;br /&gt;
- the Semantic Web&lt;br /&gt;
&lt;br /&gt;
- towards a standard top-level ontology&lt;br /&gt;
&lt;br /&gt;
- ontology and the Federal Government Data Integration Initiative (anno 2009)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[https://www.youtube.com/playlist?list=PLyngZgIl3WTjov-UhEW7N145LVBPrRYLZ Course content can be found here].&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Student Learning Outcomes&#039;&#039;&#039; ==&lt;br /&gt;
          &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Program Outcomes/Competencies  &lt;br /&gt;
! Instructional Method(s)&lt;br /&gt;
! Assessment Method(s)&lt;br /&gt;
|-&lt;br /&gt;
| The student will acquire a beginner&#039;s knowledge of philosophy that will introduce him or her to more technical aspects of the discipline in subsequent semesters.  &lt;br /&gt;
| Lectures and class discussions.&lt;br /&gt;
| Assessment of questions submitted at the end of the lectures.&lt;br /&gt;
|-&lt;br /&gt;
| The student will acquire experience in using one principal philosophical method, namely: analysing arguments and formulating relevant questions for which answers have not been provided in the course of the lectures.&lt;br /&gt;
| The student is required to submit written questions for each of the 8 lectures making up the course.&lt;br /&gt;
| Review of questions for relevance and originality.&lt;br /&gt;
|-&lt;br /&gt;
| The student will acquire experience in using a second principal philosophical method, namely: taking an active part in oral arguments.&lt;br /&gt;
| Performance in the final -- synchronous -- session of the course, active participation in which is compulsory.&lt;br /&gt;
| Review of active participation by student.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
Grading for the course will take the following form. For each of the 8 lectures the student is required to prepare a single question relating to the content of that lecture. It should be such that an answer to the question is not provided in the lecture, and it should be of general interest to the other students taking the course. After digesting the content of all lectures, the student should send a list of 8 questions to phismith@buffalo.edu with the subject heading &amp;quot;8 Questions&amp;quot;. After receiving emails of this form from all students enrolled in the class, &#039;&#039;&#039;and not later than November 15&#039;&#039;&#039;, a Zoom meeting will be organized at which students will be required to engage in arguments pertaining to a subset of these questions. Participation in this Zoom meeting is required of all class participants, and class contributions will be part of the grade for the course. &lt;br /&gt;
&lt;br /&gt;
The final grade will be calculated on the basis of: &lt;br /&gt;
:1. quality of questions, measured in terms of interestingness, clarity, and relevance to the course&lt;br /&gt;
:2. completeness of the list of questions received&lt;br /&gt;
:3. quality of contributions to the final synchronous meeting&lt;br /&gt;
&lt;br /&gt;
A video recording of Dr Smith&#039;s answers to representative questions is available [https://buffalo.box.com/v/Questions-on-ontology here].&lt;br /&gt;
&lt;br /&gt;
For further information please contact Dr Smith at phismith@buffalo.edu&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grade Quality Percentage&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| A || 4.0 || 90.0% - 100.0%&lt;br /&gt;
|-&lt;br /&gt;
| A-	|| 3.67	|| 87.0% - 89.9%&lt;br /&gt;
|-&lt;br /&gt;
| B+	|| 3.33	|| 84.0% - 86.9%&lt;br /&gt;
|-&lt;br /&gt;
| B	|| 3.00	|| 80.0% - 83.9%&lt;br /&gt;
|-&lt;br /&gt;
| B-	|| 2.67	|| 77.0% - 79.9%&lt;br /&gt;
|-&lt;br /&gt;
| C+	|| 2.33	|| 74.0% - 76.9%&lt;br /&gt;
|-&lt;br /&gt;
| C	|| 2.00	|| 71.0% - 73.9%&lt;br /&gt;
|-&lt;br /&gt;
| C-	|| 1.67	|| 68.0% - 70.9%&lt;br /&gt;
|-&lt;br /&gt;
| D+	|| 1.33	|| 65.0% - 67.9%&lt;br /&gt;
|-&lt;br /&gt;
| D	|| 1.00	|| 62.0% - 64.9%&lt;br /&gt;
|-&lt;br /&gt;
| F	|| 0	|| 61.9% or below&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An interim grade of Incomplete (I) may be assigned if the student has completed some but not all requirements for the course. The default grade accompanying an interim grade of &#039;I&#039; shall be &#039;U&#039; and will be displayed on the UB record as &#039;IU.&#039; The default Unsatisfactory (U) grade shall become the permanent course grade of record if the &#039;IU&#039; is not changed through formal notice by the instructor upon the student&#039;s completion of the course.&lt;br /&gt;
&lt;br /&gt;
Assignment of an interim &#039;IU&#039; is at the discretion of the instructor. A grade of &#039;IU&#039; can be assigned only if successful completion of unfulfilled course requirements can result in a final grade better than the default &#039;U&#039; grade. The student should have a passing average in the requirements already completed. The instructor shall provide the student with a written specification of the requirements to be fulfilled.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Related Policies and Services&#039;&#039;&#039;&lt;br /&gt;
Academic integrity is a fundamental university value. Through the honest completion of academic work, students sustain the integrity of the university while facilitating the university&#039;s imperative for the transmission of knowledge and culture based upon the generation of new and innovative ideas. See http://grad.buffalo.edu/Academics/Policies-Procedures/Academic-Integrity.html.&lt;br /&gt;
&lt;br /&gt;
Accessibility resources: If you have a disability that requires reasonable accommodations to enable you to participate in this course, please contact the Office of Accessibility Resources in 60 Capen Hall, 716-645-2608, and also the instructor of this course during the first week of class. The office will provide you with information and review appropriate arrangements for reasonable accommodations.&lt;br /&gt;
&lt;br /&gt;
University support services: Students are often unaware of university support services. For example, the Center for Excellence in Writing provides support for written work, and several tutoring centers on campus provide academic success support and resources.&lt;br /&gt;
&lt;br /&gt;
Available resources on sexual assault: UB is committed to providing an environment free of all forms of discrimination and sexual harassment, including sexual assault, domestic and dating violence and stalking. If you have experienced gender-based violence (intimate partner violence, attempted or completed sexual assault, harassment, coercion, stalking, etc.), UB has resources to help. This includes academic accommodations, health and counseling services, housing accommodations, helping with legal protective orders, and assistance with reporting the incident to police or other UB officials if you so choose. Please contact UB’s Title IX Coordinator at 716-645-2266 for more information. For confidential assistance, you may also contact a Crisis Services Campus Advocate at 716-796-4399.&lt;br /&gt;
&lt;br /&gt;
Counseling services: As a student you may experience a range of issues that can cause barriers to learning or reduce your ability to participate in daily activities. These might include strained relationships, anxiety, high levels of stress, alcohol/drug problems, feeling down, health concerns, or unwanted sexual experiences. Counseling, Health Services, and Health Promotion are here to help with these or other concerns. You can learn more about these programs and services by contacting:&lt;br /&gt;
&lt;br /&gt;
:Counseling Services: 120 Richmond Quad (North Campus), phone 716-645-2720&lt;br /&gt;
:Health Services: Michael Hall (South Campus), phone: 716-829-3316&lt;br /&gt;
:Health Promotion: 114 Student Union (North Campus), phone: 716- 645-2837&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;&#039;Recommended reading&#039;&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
:Marjorie Grene, &#039;&#039;A Portrait of Aristotle&#039;&#039;&lt;br /&gt;
:R. Arp, B. Smith, A. D. Spear, &#039;&#039;[https://mitpress.mit.edu/index.php?q=books/building-ontologies-basic-formal-ontology Building Ontologies with Basic Formal Ontology]&#039;&#039;&lt;br /&gt;
:John R. Searle, &#039;&#039;Making the Social World&#039;&#039;&lt;br /&gt;
:E. J. Lowe, &#039;&#039;The Four Category Ontology&#039;&#039;&lt;br /&gt;
:Roman Ingarden, &#039;&#039;The Literary Work of Art. An Investigation on the Borderlines of Ontology, Logic, and Theory of Language&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75511</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75511"/>
		<updated>2025-12-28T18:57:12Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), a widely used top-level ontology approved in 2021 as an international standard (ISO/IEC 21838-2). The course is divided into two parts. The first is asynchronous, covering the topics listed in the table below; the second is synchronous, covering (a) questions raised in the asynchronous class, and (b) working sessions designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. Working sessions are tentatively scheduled to take place from 7-8pm as listed in the table. Options include videos (YouTube, TikTok, ...)&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, ISO, BFO coding using LLMs || https://www.youtube.com/@basicformalontology470&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 || top-level vs domain ontologies; top of BFO ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 || specific dependence, realizables ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 || material entities; object aggregates, sites, boundaries ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 || realizables, functions || https://www.youtube.com/watch?v=fkkWkTIxrNQ&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 || social wholes, dispositions and roles ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 || relations, temporalized relations ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || processes, process profiles, changes || last part of https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 || IAO, language #42 || https://www.youtube.com/watch?v=Y3btP1InPZY&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 || BFO 101 || https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 || DOLCE PSS || https://www.youtube.com/watch?v=XTVR7k63_VA&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 || Foundries ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 || GDCs, Ingarden, the State ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 || Synchronous question answer session ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous question/answer session ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of &lt;br /&gt;
#Working sessions (50%) &lt;br /&gt;
#Final (50%) &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session each student should prepare exactly one single-sentence question relating to the content of this session. The answer to this question should not be contained in the video content for this session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Video Title !! Duration !! YouTube Link&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Basic Formal Ontology 101 (July 2025) || 1:58:50 || https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
|-&lt;br /&gt;
| 2 || Basic Formal Ontology Tutorial (2025) || 1:46:05 || https://www.youtube.com/watch?v=GWkk5AfRCpM&lt;br /&gt;
|-&lt;br /&gt;
| 3 || The Ontology of Science || 1:06:08 || https://www.youtube.com/watch?v=PwsBxRs9kns&lt;br /&gt;
|-&lt;br /&gt;
| 5 || Basic Formal Ontology (BFO), July 2023 || 10:20 || https://www.youtube.com/watch?v=uflMfvI-ZxI&lt;br /&gt;
|-&lt;br /&gt;
| 6 || The Ontology of (Supply Chain) Services || 11:35 || https://www.youtube.com/watch?v=F1Zlunh3eMw&lt;br /&gt;
|-&lt;br /&gt;
| 7 || Industrial Ontologies Foundry (2022) || 7:52 || https://www.youtube.com/watch?v=1pfsimHTApU&lt;br /&gt;
|-&lt;br /&gt;
| 8 || Ontology of (Social) Services || 10:38 || https://www.youtube.com/watch?v=9qrwWAISrC8&lt;br /&gt;
|-&lt;br /&gt;
| 9 || Ontology Foundries || 20:51 || https://www.youtube.com/watch?v=iFiwmq7f4wQ&lt;br /&gt;
|-&lt;br /&gt;
| 10 || ISO/IEC 21838 Top Level Ontologies (November 2021) || 10:57 || https://www.youtube.com/watch?v=YsdcH-yYkTI&lt;br /&gt;
|-&lt;br /&gt;
| 11 || Realizable Entities in Basic Formal Ontology || 1:36:36 || https://www.youtube.com/watch?v=PJaEYdF9ikE&lt;br /&gt;
|-&lt;br /&gt;
| 12 || How to handle data about what does not exist || 7:43 || https://www.youtube.com/watch?v=ai4YdLiCGNM&lt;br /&gt;
|-&lt;br /&gt;
| 13 || ISO/IEC 21838 || 10:00 || https://www.youtube.com/watch?v=aux_zcK7XSI&lt;br /&gt;
|-&lt;br /&gt;
| 14 || Reasoning with the Information Artifact Ontology || 7:47 || https://www.youtube.com/watch?v=sTx_rRWmTqE&lt;br /&gt;
|-&lt;br /&gt;
| 15 || BFO 2020 Temporalized Relations || 34:10 || https://www.youtube.com/watch?v=fkkWkTIxrNQ&lt;br /&gt;
|-&lt;br /&gt;
| 16 || ISO/IEC 21838 || 1:32:41 || https://www.youtube.com/watch?v=_0masZPGLb0&lt;br /&gt;
|-&lt;br /&gt;
| 17 || What problem with OWL is BFO-2020 trying to solve || 28:04 || https://www.youtube.com/watch?v=IDs7Pthdows&lt;br /&gt;
|-&lt;br /&gt;
| 18 || Ontologies for Space and Ground Systems || 29:05 || https://www.youtube.com/watch?v=x3ugXHOyLLw&lt;br /&gt;
|-&lt;br /&gt;
| 20 || BFO JOWO Tutorial Part 2 || 1:10:53 || https://www.youtube.com/watch?v=wh_KZGXc1Es&lt;br /&gt;
|-&lt;br /&gt;
| 21 || BFO JOWO Tutorial Part 1 || 23:27 || https://www.youtube.com/watch?v=VYDe09TOw2M&lt;br /&gt;
|-&lt;br /&gt;
| 22 || Introduction to Basic Formal Ontology (September 2019) || 8:51 || https://www.youtube.com/watch?v=p0buEjR3t8A&lt;br /&gt;
|-&lt;br /&gt;
| 23 || Ontology as Product-Service System: A Study of GO, BFO and DOLCE || 11:29 || https://www.youtube.com/watch?v=XTVR7k63_VA&lt;br /&gt;
|-&lt;br /&gt;
| 24 || BFO Tutorial (2019). Part 5: BFO as Top-Level Ontology || 21:16 || https://www.youtube.com/watch?v=ZMUM1z2Zi9c&lt;br /&gt;
|-&lt;br /&gt;
| 25 || BFO Tutorial (2019). Part 6: Temporalized Relations in BFO ISO || 21:55 || https://www.youtube.com/watch?v=8-dGGDQ7qCw&lt;br /&gt;
|-&lt;br /&gt;
| 26 || BFO Tutorial (2019). Part 4: Sites, Boundaries, Objects || 19:45 || https://www.youtube.com/watch?v=GJJcu0UKQyo&lt;br /&gt;
|-&lt;br /&gt;
| 27 || BFO Tutorial (2019). Part 3: Qualities, Dispositions, Diseases || 24:37 || https://www.youtube.com/watch?v=2UmKWQ-fH4s&lt;br /&gt;
|-&lt;br /&gt;
| 28 || BFO Tutorial (2019). Part 2: Why Ontologies Fail || 39:43 || https://www.youtube.com/watch?v=w5d5KmBqw3w&lt;br /&gt;
|-&lt;br /&gt;
| 29 || BFO Tutorial (2019). Part 1: Introduction to BFO ISO || 41:11 || https://www.youtube.com/watch?v=muafRW0bXgw&lt;br /&gt;
|-&lt;br /&gt;
| 30 || Basic Formal Ontology Applied to the Ontology of Language. With a coda on the Turing Test || 39:42 || https://www.youtube.com/watch?v=Y3btP1InPZY&lt;br /&gt;
|-&lt;br /&gt;
| 31 || IOF: Draft BFO Formalization Proposal. 1-25-2019 || 31:06 || https://www.youtube.com/watch?v=ZJgE-O2iREM&lt;br /&gt;
|-&lt;br /&gt;
| 36 || How BFO Deals with Data from Multiple Contexts || 16:31 || https://www.youtube.com/watch?v=K9AsCDBRJpM&lt;br /&gt;
|-&lt;br /&gt;
| 37 || Why Do We Need Upper-Level Ontologies? || 20:47 || https://www.youtube.com/watch?v=sjf9zeCh_Sw&lt;br /&gt;
|-&lt;br /&gt;
| 38 || Relationships between upper-level ontologies || 1:02:25 || https://www.youtube.com/watch?v=gJxfZ3cq5jE&lt;br /&gt;
|-&lt;br /&gt;
| 39 || Functions, Dispositions and Capabilities (2017) || 31:15 || https://www.youtube.com/watch?v=lIPg2bGJSzE&lt;br /&gt;
|-&lt;br /&gt;
| 40 || Are there Capabilities on Mars? || 1:30:51 || https://www.youtube.com/watch?v=Lo7iPP2wKgw&lt;br /&gt;
|-&lt;br /&gt;
| 41 || Introduction to BFO and to the Industrial Ontologies Foundry || 47:16 || https://www.youtube.com/watch?v=fJ4uW7PK5cI&lt;br /&gt;
|-&lt;br /&gt;
| 42 || Building Ontologies: An Introduction for Engineers (Part 2) || 53:01 || https://www.youtube.com/watch?v=8vdUUhF4JdE&lt;br /&gt;
|-&lt;br /&gt;
| 43 || Building Ontologies: An Introduction for Engineers (Part 1) || 51:30 || https://www.youtube.com/watch?v=HDARyJBvnuc&lt;br /&gt;
|-&lt;br /&gt;
| 44 || Building Ontologies: An Introduction for Engineers (Part 2) || 1:44:30 || https://www.youtube.com/watch?v=Gh0f2Us0hr0&lt;br /&gt;
|-&lt;br /&gt;
| 45 || Building Ontologies: An Introduction for Engineers (Part 1) || 54:17 || https://www.youtube.com/watch?v=iTNQYyh88-Y&lt;br /&gt;
|-&lt;br /&gt;
| 46 || Introduction to Basic Formal Ontology (2015): Part One || 7:48 || https://www.youtube.com/watch?v=IMCBON2me3Y&lt;br /&gt;
|-&lt;br /&gt;
| 47 || Introduction to Basic Formal Ontology (2015): Part Two || 1:44:29 || https://www.youtube.com/watch?v=bGPVCkuKTo4&lt;br /&gt;
|-&lt;br /&gt;
| 48 || Tutorial: Introduction to Basic Formal Ontology 2.0 (2015) ||54:16 || https://www.youtube.com/watch?v=Yl6_M1sQEAQ&lt;br /&gt;
|-&lt;br /&gt;
| 49 || Introduction to Basic Formal Ontology (BFO) 2012 ||7:14 || https://www.youtube.com/watch?v=FjOgoKvNNMM (BAD QUALITY)&lt;br /&gt;
|-&lt;br /&gt;
| 50 || Part 1: Changes in BFO 2.0, by Barry Smith || N/A || N/A&lt;br /&gt;
|-&lt;br /&gt;
| 51 || Aboutness || 21:44 || https://www.youtube.com/watch?v=PBKsupBquok&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75510</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75510"/>
		<updated>2025-12-28T18:53:35Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), a widely used top-level ontology approved in 2021 as an international standard (ISO/IEC 21838-2). The course is divided into two parts. The first is asynchronous, covering the topics listed in the table below; the second is synchronous, covering (a) questions raised in the asynchronous class, and (b) working sessions designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. Working sessions are tentatively scheduled to take place from 7-8pm as listed in the table. Options include videos (YouTube, TikTok, ...)&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics || Related links ||&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, iSO, BFO coding using LLMs || www.youtube.com/@basicformalontology470 ||&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 || top-level vs domain ontologies; top of BFO ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 || specific dependence, realizables ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 || material entities; object aggregates, sites, boundaries ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 || realizables, functions || https://www.youtube.com/watch?v=fkkWkTIxrNQ ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 || social wholes, dispositions and roles ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 || relations, temporalized relations ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || processes, process profiles, changes || last part of https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 || IAO, language #42 || https://www.youtube.com/watch?v=Y3btP1InPZY ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 || BFO 101 || https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 || DOLCE PSS || https://www.youtube.com/watch?v=XTVR7k63_VA ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 || Foundries ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 || GDCs, Ingarden, the State ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 || Synchronous question answer session ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous question/answer session ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of &lt;br /&gt;
#Working sessions (50%) &lt;br /&gt;
#Final (50%) &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session each student should prepare exactly one single-sentence question relating to the content of this session. The answer to this question should not be contained in the video content for this session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Video Title !! Duration !! YouTube Link&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Basic Formal Ontology 101 (July 2025) || 1:58:50 || https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
|-&lt;br /&gt;
| 2 || Basic Formal Ontology Tutorial (2025) || 1:46:05 || https://www.youtube.com/watch?v=GWkk5AfRCpM&lt;br /&gt;
|-&lt;br /&gt;
| 3 || The Ontology of Science || 1:06:08 || https://www.youtube.com/watch?v=PwsBxRs9kns&lt;br /&gt;
|-&lt;br /&gt;
| 5 || Basic Formal Ontology (BFO), July 2023 || 10:20 || https://www.youtube.com/watch?v=uflMfvI-ZxI&lt;br /&gt;
|-&lt;br /&gt;
| 6 || The Ontology of (Supply Chain) Services || 11:35 || https://www.youtube.com/watch?v=F1Zlunh3eMw&lt;br /&gt;
|-&lt;br /&gt;
| 7 || Industrial Ontologies Foundry (2022) || 7:52 || https://www.youtube.com/watch?v=1pfsimHTApU&lt;br /&gt;
|-&lt;br /&gt;
| 8 || Ontology of (Social) Services || 10:38 || https://www.youtube.com/watch?v=9qrwWAISrC8&lt;br /&gt;
|-&lt;br /&gt;
| 9 || Ontology Foundries || 20:51 || https://www.youtube.com/watch?v=iFiwmq7f4wQ&lt;br /&gt;
|-&lt;br /&gt;
| 10 || ISO/IEC 21838 Top Level Ontologies (November 2021) || 10:57 || https://www.youtube.com/watch?v=YsdcH-yYkTI&lt;br /&gt;
|-&lt;br /&gt;
| 11 || Realizable Entities in Basic Formal Ontology || 1:36:36 || https://www.youtube.com/watch?v=PJaEYdF9ikE&lt;br /&gt;
|-&lt;br /&gt;
| 12 || How to handle data about what does not exist || 7:43 || https://www.youtube.com/watch?v=ai4YdLiCGNM&lt;br /&gt;
|-&lt;br /&gt;
| 13 || ISO/IEC 21838 || 10:00 || https://www.youtube.com/watch?v=aux_zcK7XSI&lt;br /&gt;
|-&lt;br /&gt;
| 14 || Reasoning with the Information Artifact Ontology || 7:47 || https://www.youtube.com/watch?v=sTx_rRWmTqE&lt;br /&gt;
|-&lt;br /&gt;
| 15 || BFO 2020 Temporalized Relations || 34:10 || https://www.youtube.com/watch?v=fkkWkTIxrNQ&lt;br /&gt;
|-&lt;br /&gt;
| 16 || ISO/IEC 21838 || 1:32:41 || https://www.youtube.com/watch?v=_0masZPGLb0&lt;br /&gt;
|-&lt;br /&gt;
| 17 || What problem with OWL is BFO-2020 trying to solve || 28:04 || https://www.youtube.com/watch?v=IDs7Pthdows&lt;br /&gt;
|-&lt;br /&gt;
| 18 || Ontologies for Space and Ground Systems || 29:05 || https://www.youtube.com/watch?v=x3ugXHOyLLw&lt;br /&gt;
|-&lt;br /&gt;
| 20 || BFO JOWO Tutorial Part 2 || 1:10:53 || https://www.youtube.com/watch?v=wh_KZGXc1Es&lt;br /&gt;
|-&lt;br /&gt;
| 21 || BFO JOWO Tutorial Part 1 || 23:27 || https://www.youtube.com/watch?v=VYDe09TOw2M&lt;br /&gt;
|-&lt;br /&gt;
| 22 || Introduction to Basic Formal Ontology (September 2019) || 8:51 || https://www.youtube.com/watch?v=p0buEjR3t8A&lt;br /&gt;
|-&lt;br /&gt;
| 23 || Ontology as Product-Service System: A Study of GO, BFO and DOLCE || 11:29 || https://www.youtube.com/watch?v=XTVR7k63_VA&lt;br /&gt;
|-&lt;br /&gt;
| 24 || BFO Tutorial (2019). Part 5: BFO as Top-Level Ontology || 21:16 || https://www.youtube.com/watch?v=ZMUM1z2Zi9c&lt;br /&gt;
|-&lt;br /&gt;
| 25 || BFO Tutorial (2019). Part 6: Temporalized Relations in BFO ISO || 21:55 || https://www.youtube.com/watch?v=8-dGGDQ7qCw&lt;br /&gt;
|-&lt;br /&gt;
| 26 || BFO Tutorial (2019). Part 4: Sites, Boundaries, Objects || 19:45 || https://www.youtube.com/watch?v=GJJcu0UKQyo&lt;br /&gt;
|-&lt;br /&gt;
| 27 || BFO Tutorial (2019). Part 3: Qualities, Dispositions, Diseases || 24:37 || https://www.youtube.com/watch?v=2UmKWQ-fH4s&lt;br /&gt;
|-&lt;br /&gt;
| 28 || BFO Tutorial (2019). Part 2: Why Ontologies Fail || 39:43 || https://www.youtube.com/watch?v=w5d5KmBqw3w&lt;br /&gt;
|-&lt;br /&gt;
| 29 || BFO Tutorial (2019). Part 1: Introduction to BFO ISO || 41:11 || https://www.youtube.com/watch?v=muafRW0bXgw&lt;br /&gt;
|-&lt;br /&gt;
| 30 || Basic Formal Ontology Applied to the Ontology of Language. With a coda on the Turing Test || 39:42 || https://www.youtube.com/watch?v=Y3btP1InPZY&lt;br /&gt;
|-&lt;br /&gt;
| 31 || IOF: Draft BFO Formalization Proposal. 1-25-2019 || 31:06 || https://www.youtube.com/watch?v=ZJgE-O2iREM&lt;br /&gt;
|-&lt;br /&gt;
| 36 || How BFO Deals with Data from Multiple Contexts || 16:31 || https://www.youtube.com/watch?v=K9AsCDBRJpM&lt;br /&gt;
|-&lt;br /&gt;
| 37 || Why Do We Need Upper-Level Ontologies? || 20:47 || https://www.youtube.com/watch?v=sjf9zeCh_Sw&lt;br /&gt;
|-&lt;br /&gt;
| 38 || Relationships between upper-level ontologies || 1:02:25 || https://www.youtube.com/watch?v=gJxfZ3cq5jE&lt;br /&gt;
|-&lt;br /&gt;
| 39 || Functions, Dispositions and Capabilities (2017) || 31:15 || https://www.youtube.com/watch?v=lIPg2bGJSzE&lt;br /&gt;
|-&lt;br /&gt;
| 40 || Are there Capabilities on Mars? || 1:30:51 || https://www.youtube.com/watch?v=Lo7iPP2wKgw&lt;br /&gt;
|-&lt;br /&gt;
| 41 || Introduction to BFO and to the Industrial Ontologies Foundry || 47:16 || https://www.youtube.com/watch?v=fJ4uW7PK5cI&lt;br /&gt;
|-&lt;br /&gt;
| 42 || Building Ontologies: An Introduction for Engineers (Part 2) || 53:01 || https://www.youtube.com/watch?v=8vdUUhF4JdE&lt;br /&gt;
|-&lt;br /&gt;
| 43 || Building Ontologies: An Introduction for Engineers (Part 1) || 51:30 || https://www.youtube.com/watch?v=HDARyJBvnuc&lt;br /&gt;
|-&lt;br /&gt;
| 44 || Building Ontologies: An Introduction for Engineers (Part 2) || 1:44:30 || https://www.youtube.com/watch?v=Gh0f2Us0hr0&lt;br /&gt;
|-&lt;br /&gt;
| 45 || Building Ontologies: An Introduction for Engineers (Part 1) || 54:17 || https://www.youtube.com/watch?v=iTNQYyh88-Y&lt;br /&gt;
|-&lt;br /&gt;
| 46 || Introduction to Basic Formal Ontology (2015): Part One || 7:48 || https://www.youtube.com/watch?v=IMCBON2me3Y&lt;br /&gt;
|-&lt;br /&gt;
| 47 || Introduction to Basic Formal Ontology (2015): Part Two || N/A || https://www.youtube.com/watch?v=bGPVCkuKTo4&lt;br /&gt;
|-&lt;br /&gt;
| 48 || Tutorial: Introduction to Basic Formal Ontology 2.0 (2015) || N/A || https://www.youtube.com/watch?v=Yl6_M1sQEAQ&lt;br /&gt;
|-&lt;br /&gt;
| 49 || Introduction to Basic Formal Ontology (BFO) 2012 || N/A || https://www.youtube.com/watch?v=FjOgoKvNNMM&lt;br /&gt;
|-&lt;br /&gt;
| 50 || Part 1: Changes in BFO 2.0, by Barry Smith || N/A || &lt;br /&gt;
|-&lt;br /&gt;
| 51 || Aboutness || 21:44 || https://www.youtube.com/watch?v=PBKsupBquok&lt;br /&gt;
|}&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75509</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75509"/>
		<updated>2025-12-28T17:10:18Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2). The course is divided into two parts. The first is asynchronous, covering the topics listed in the table below; the second is synchronous, covering (a) questions raised in the asynchronous class, and (b) working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. Working sessions are tentatively scheduled to take place from 7-8pm as listed in the table. Options are: videos (youtube, tiktok, &lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, ISO, BFO coding using LLMs || www.youtube.com/@basicformalontology470 ||&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 || top-level vs domain ontologies; top of BFO ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 || specific dependence, realizables ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 || material entities; object aggregates, sites, boundaries ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 || realizables, functions || https://www.youtube.com/watch?v=fkkWkTIxrNQ ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 || social wholes, dispositions and roles ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 || relations, temporalized relations ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || processes, process profiles, changes || last part of https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 || IAO, language #42 || https://www.youtube.com/watch?v=Y3btP1InPZY ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 || BFO 101 || https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 || DOLCE PSS || https://www.youtube.com/watch?v=XTVR7k63_VA ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 || Foundries ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 || GDCs, Ingarden, the State ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 || Synchronous question answer session ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous question/answer session ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of &lt;br /&gt;
#Working sessions (50%) &lt;br /&gt;
#Final (50%) &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session each student should prepare exactly one single-sentence question relating to the content of this session. The answer to this question should not be contained in the video content for this session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75508</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75508"/>
		<updated>2025-12-28T17:03:00Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, ISO, Mil, DHS mention || www.youtube.com/@basicformalontology470 ||&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 || top-level vs domain ontologies; top of BFO ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 || specific dependence, realizables ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 || material entities; object aggregates, sites, boundaries ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 || realizables, functions || https://www.youtube.com/watch?v=fkkWkTIxrNQ ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 || social wholes, dispositions and roles ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 || relations, temporalized relations ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || processes, process profiles, changes || last part of https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 || IAO, language #42 || https://www.youtube.com/watch?v=Y3btP1InPZY ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 || BFO 101 || https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 || DOLCE PSS || https://www.youtube.com/watch?v=XTVR7k63_VA ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 || Foundries ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 || GDCs, Ingarden, the State ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 || Synchronous question answer session ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous question/answer session ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of &lt;br /&gt;
#Working sessions (50%): contributions to the weekly working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%) &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session each student should prepare exactly one single-sentence question relating to the content of this session. The answer to this question should not be contained in the video content for this session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75507</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75507"/>
		<updated>2025-12-28T17:01:11Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, ISO, Mil, DHS mention || www.youtube.com/@basicformalontology470 ||&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 || top-level vs domain ontologies; top of BFO ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 || specific dependence, realizables ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 || material entities; object aggregates, sites, boundaries ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 || realizables, functions || https://www.youtube.com/watch?v=fkkWkTIxrNQ ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 || social wholes, dispositions and roles ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 || relations, temporalized relations ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || processes, process profiles, changes || last part of https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 || IAO, language #42 ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 || BFO 101 || https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 || DOLCE PSS || https://www.youtube.com/watch?v=XTVR7k63_VA ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 || Foundries ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 || GDCs, Ingarden, the State ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 || Synchronous question answer session ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous question/answer session ||&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of &lt;br /&gt;
#Working sessions (50%): contributions to the weekly working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%) &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session each student should prepare exactly one single-sentence question relating to the content of this session. The answer to this question should not be contained in the video content for this session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75506</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75506"/>
		<updated>2025-12-28T16:59:24Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, ISO, Mil, DHS mention || www.youtube.com/@basicformalontology470&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 || top-level vs domain ontologies; top of BFO ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 || specific dependence, realizables ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 || material entities; object aggregates, sites, boundaries ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 || realizables, functions || https://www.youtube.com/watch?v=fkkWkTIxrNQ ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 || social wholes, dispositions and roles ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 || relations, temporalized relations ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || processes, process profiles, changes || last part of https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 || IAO, language #42 ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 || BFO 101 || https://www.youtube.com/watch?v=7sbzF9p7qvk ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 || DOLCE PSS || https://www.youtube.com/watch?v=XTVR7k63_VA ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 || Foundries ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 || GDCs, Ingarden, the State&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 || Synchronous question answer session&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous question/answer session&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of &lt;br /&gt;
#Working sessions (50%): contributions to the weekly working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%) &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session each student should prepare exactly one single-sentence question relating to the content of this session. The answer to this question should not be contained in the video content for this session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75505</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75505"/>
		<updated>2025-12-28T16:56:32Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, ISO, Mil, DHS mention || www.youtube.com/@basicformalontology470&lt;br /&gt;
&lt;br /&gt;
IAO, language #42&lt;br /&gt;
BFO 101 https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
DOLCE PSS https://www.youtube.com/watch?v=XTVR7k63_VA&lt;br /&gt;
Foundries&lt;br /&gt;
GDCs, Ingarden, the State&lt;br /&gt;
Synchronous - answering questions&lt;br /&gt;
Synchronous 2 - answering questions&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 || top-level vs domain ontologies; top of BFO ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 || specific dependence, realizables ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 || material entities; object aggregates, sites, boundaries ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 || realizables, functions || https://www.youtube.com/watch?v=fkkWkTIxrNQ&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 || social wholes, dispositions and roles ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 || relations, temporalized relations ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || processes, process profiles, changes || last part of https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous final exam (question/answer) session&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11 (note that March 18 is Spring recess)&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of: &lt;br /&gt;
#Working sessions (50%): contributions to these working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%): a &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session, each student should prepare exactly one single-sentence question relating to the content of that session. The answer to this question should not be contained in the video content for the session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75504</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75504"/>
		<updated>2025-12-28T16:55:26Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), a widely used top-level ontology approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, ISO, Mil, DHS mention || www.youtube.com/@basicformalontology470&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
social wholes, dispositions and roles&lt;br /&gt;
relations, temporalized relations&lt;br /&gt;
processes, process profiles, changes, last part of https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
IAO, language #42&lt;br /&gt;
BFO 101 https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
DOLCE PSS https://www.youtube.com/watch?v=XTVR7k63_VA&lt;br /&gt;
Foundries&lt;br /&gt;
GDCs, Ingarden, the State&lt;br /&gt;
Synchronous - answering questions&lt;br /&gt;
Synchronous 2 - answering questions&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 || top-level vs domain ontologies; top of BFO ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 || specific dependence, realizables ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 || material entities; object aggregates, sites, boundaries ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 || realizables, functions || https://www.youtube.com/watch?v=fkkWkTIxrNQ&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || &lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous final exam (question/answer) session&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11 (note that March 18 is Spring recess)&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of: &lt;br /&gt;
#Working sessions (50%): contributions to these working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%): a &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session, each student should prepare exactly one single-sentence question relating to the content of that session. The answer to this question should not be contained in the video content for the session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75503</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75503"/>
		<updated>2025-12-28T16:53:53Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), a widely used top-level ontology approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || market, this course STEM/Phil, history of BFO, ISO, Mil, DHS mention || www.youtube.com/@basicformalontology470&lt;br /&gt;
top-level vs domain ontologies; top of BFO&lt;br /&gt;
specific dependence, realizables&lt;br /&gt;
material entities; object aggregates, sites, boundaries&lt;br /&gt;
realizables, functions, https://www.youtube.com/watch?v=fkkWkTIxrNQ&lt;br /&gt;
social wholes, dispositions and roles&lt;br /&gt;
relations, temporalized relations&lt;br /&gt;
processes, process profiles, changes, last part of https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
IAO, language #42&lt;br /&gt;
BFO 101 https://www.youtube.com/watch?v=7sbzF9p7qvk&lt;br /&gt;
DOLCE PSS https://www.youtube.com/watch?v=XTVR7k63_VA&lt;br /&gt;
Foundries&lt;br /&gt;
GDCs, Ingarden, the State&lt;br /&gt;
Synchronous - answering questions&lt;br /&gt;
Synchronous 2 - answering questions&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || &lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous final exam (question/answer) session&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11 (note that March 18 is Spring recess)&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of: &lt;br /&gt;
#Working sessions (50%): contributions to these working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%): a &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session, each student should prepare exactly one single-sentence question relating to the content of that session. The answer to this question should not be contained in the video content for the session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75502</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75502"/>
		<updated>2025-12-28T16:53:17Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), a widely used top-level ontology approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics !! Related links&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || &lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || &lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous final exam (question/answer) session&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11 (note that March 18 is Spring recess)&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of: &lt;br /&gt;
#Working sessions (50%): contributions to these working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%): a &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session, each student should prepare exactly one single-sentence question relating to the content of that session. The answer to this question should not be contained in the video content for the session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75501</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75501"/>
		<updated>2025-12-28T16:52:58Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), a widely used top-level ontology approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
Note that March 18 is Spring recess.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Topics&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 || &lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || &lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous final exam (question/answer) session&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11 (note that March 18 is Spring recess)&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of: &lt;br /&gt;
#Working sessions (50%): contributions to these working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%): a &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session, each student should prepare exactly one single-sentence question relating to the content of that session. The answer to this question should not be contained in the video content for the session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75500</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75500"/>
		<updated>2025-12-28T16:50:28Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), a widely used top-level ontology approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! # !! Date !! Notes&lt;br /&gt;
|-&lt;br /&gt;
| 1 || January 21 ||&lt;br /&gt;
|-&lt;br /&gt;
| 2 || January 28 ||&lt;br /&gt;
|-&lt;br /&gt;
| 3 || February 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 4 || February 11 ||&lt;br /&gt;
|-&lt;br /&gt;
| 5 || February 18 ||&lt;br /&gt;
|-&lt;br /&gt;
| 6 || February 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 7 || March 4 ||&lt;br /&gt;
|-&lt;br /&gt;
| 8 || March 11 || March 18 is Spring recess&lt;br /&gt;
|-&lt;br /&gt;
| 9 || March 25 ||&lt;br /&gt;
|-&lt;br /&gt;
| 10 || April 1 ||&lt;br /&gt;
|-&lt;br /&gt;
| 11 || April 8 ||&lt;br /&gt;
|-&lt;br /&gt;
| 12 || April 15 ||&lt;br /&gt;
|-&lt;br /&gt;
| 13 || April 22 ||&lt;br /&gt;
|-&lt;br /&gt;
| 14 || April 29 ||&lt;br /&gt;
|-&lt;br /&gt;
| 15 || May 5 || Synchronous final exam (question/answer) session&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11 (note that March 18 is Spring recess)&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of: &lt;br /&gt;
#Working sessions (50%): contributions to these working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%): a &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session, each student should prepare exactly one single-sentence question relating to the content of that session. The answer to this question should not be contained in the video content for the session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75499</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75499"/>
		<updated>2025-12-28T14:37:03Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), a widely used top-level ontology approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11 (note that March 18 is Spring recess)&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded on the basis of: &lt;br /&gt;
#Working sessions (50%): contributions to these working sessions, which will be designed to lead to the creation of online content, summarizing aspects of BFO and of how BFO is used, that is suitable for distribution to a wider audience. &lt;br /&gt;
#Final (50%): a &#039;&#039;&#039;synchronous&#039;&#039;&#039; session, based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session, each student should prepare exactly one single-sentence question relating to the content of that session. The answer to this question should not be contained in the video content for the session. All questions should be sent in a single email to ifomis@gmail.com on April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75498</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75498"/>
		<updated>2025-12-28T14:36:28Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11&lt;br /&gt;
#:March 18 Spring recess&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures, which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions, tentatively scheduled for Wednesdays at 7-8pm. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Grading&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Students will be graded as follows: &lt;br /&gt;
#Working sessions (50%): contributions to these working sessions, which are designed to lead to the creation of online content summarizing aspects of BFO and its uses, suitable for distribution to a wider audience. &lt;br /&gt;
#Final exam (50%): a &#039;&#039;&#039;synchronous&#039;&#039;&#039; session based on questions assembled by students over the course of the semester, as follows: &lt;br /&gt;
##For each asynchronous session, each student should prepare exactly one single-sentence question relating to that session&#039;s content. The answer to this question should not be contained in the video content for the session. All questions should be sent in a single email to ifomis@gmail.com by April 30. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75497</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75497"/>
		<updated>2025-12-28T14:31:08Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11&lt;br /&gt;
#:March 18 Spring recess&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5 Synchronous final exam (question/answer) session&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures, which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions, tentatively scheduled for Wednesdays at 7-8pm. Students will be graded on their contributions to these working sessions, which are designed to lead to the creation of online content summarizing aspects of BFO and its uses, suitable for distribution to a wider audience. &lt;br /&gt;
&lt;br /&gt;
Exam: The final &#039;&#039;&#039;synchronous&#039;&#039;&#039; session&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75496</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75496"/>
		<updated>2025-12-28T14:29:18Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11&lt;br /&gt;
#:March 18 Spring recess&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures, which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions, tentatively scheduled for Wednesdays at 7-8pm. Students will be graded on their contributions to these working sessions, which are designed to lead to the creation of online content summarizing aspects of BFO and its uses, suitable for distribution to a wider audience. &lt;br /&gt;
&lt;br /&gt;
Exam: The final &#039;&#039;&#039;synchronous&#039;&#039;&#039; session&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
	<entry>
		<id>https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75495</id>
		<title>BFO-Intro</title>
		<link rel="alternate" type="text/html" href="https://ncorwiki.buffalo.edu/index.php?title=BFO-Intro&amp;diff=75495"/>
		<updated>2025-12-28T14:28:43Z</updated>

		<summary type="html">&lt;p&gt;Phismith: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PHI 637 Introduction to Basic Formal Ontology&lt;br /&gt;
&lt;br /&gt;
Dr. Barry Smith&lt;br /&gt;
&lt;br /&gt;
ONLINE, HYBRID, TWO CREDIT COURSE&lt;br /&gt;
&lt;br /&gt;
This course will present an introduction to Basic Formal Ontology (BFO), which is a widely used top-level ontology, approved in 2021 as an international standard (ISO/IEC 21838-2).&lt;br /&gt;
&lt;br /&gt;
#January 21&lt;br /&gt;
#January 28&lt;br /&gt;
#February 4&lt;br /&gt;
#February 11&lt;br /&gt;
#February 18&lt;br /&gt;
#February 25&lt;br /&gt;
#March 4&lt;br /&gt;
#March 11&lt;br /&gt;
#:March 18 Spring recess&lt;br /&gt;
#March 25&lt;br /&gt;
#April 1&lt;br /&gt;
#April 8&lt;br /&gt;
#April 15&lt;br /&gt;
#April 22&lt;br /&gt;
#April 29&lt;br /&gt;
#May 5&lt;br /&gt;
&lt;br /&gt;
Material for the course will be based on the following BFO tutorials, supplemented by documentation of more recent developments: &lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=muafRW0bXgw&amp;amp;list=PLyngZgIl3WTj6tWcypTLpCnYXu6o93kD4&amp;amp;pp=gAQB 2015 Tutorials and Earlier Material]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/playlist?list=PLyngZgIl3WTg5f36E7r3W5px_58OOWE5I 2019 Tutorial]&lt;br /&gt;
&lt;br /&gt;
*[https://www.youtube.com/watch?v=YsdcH-yYkTI&amp;amp;list=PLyngZgIl3WThebVwYfCphjx85NOGN8BJD 2025 and Other Recent Tutorials]&lt;br /&gt;
&lt;br /&gt;
Revised versions of this tutorial material will be divided into 14 single-hour lectures, which will be made available &#039;&#039;&#039;asynchronously&#039;&#039;&#039;. The lectures will form the basis for &#039;&#039;&#039;synchronous&#039;&#039;&#039; weekly working sessions, tentatively scheduled for Wednesdays at 7-8pm. Students will be graded on their contributions to these working sessions, which are designed to lead to the creation of online content summarizing aspects of BFO and its uses, suitable for distribution to a wider audience. &lt;br /&gt;
&lt;br /&gt;
Exam: The final &#039;&#039;&#039;synchronous&#039;&#039;&#039; session&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Issues to be addressed include: &lt;br /&gt;
&lt;br /&gt;
:Reviews of BFO coding using LLMs &lt;br /&gt;
:Formulating a response to [https://buffalo.box.com/v/BFO-Expert-Coding BFO Expert Coding Challenge] - [https://scholar.google.com/scholar?cites=53308182319355410&amp;amp;as_sdt=5,33&amp;amp;sciodt=0,33&amp;amp;hl=en Citations]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background reading&#039;&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
[https://www.iso.org/standard/74572.html ISO standard]&lt;br /&gt;
&lt;br /&gt;
[https://buffalo.box.com/s/3pyas5wwfwd2bgbe5o2dz36kncm9z5gf Building Ontologies with Basic Formal Ontology (MIT Press, 2015)]&lt;/div&gt;</summary>
		<author><name>Phismith</name></author>
	</entry>
</feed>