01:730:329 Minds, Machines, and Persons
- Instructor: Rubenstein, Ezra | Sorensen, David
01 (E. Rubenstein) A comparison of the nature of the human mind with that of complex machines, and its consequences for questions about the personhood of robots.
03 (D. Sorensen)
In this course, we will study competing philosophical theories about the nature of the mind and mental phenomena—intentionality, mental representations, and consciousness—and what these theories tell us about the possibility of creating machines with minds like ours. We will begin with some of the most foundational metaphysical issues in the philosophy of mind. Then, we will examine the foundations of computational cognitive science and artificial intelligence research. Next, we will look at attempts to understand and explain mental representations naturalistically. Lastly, we will discuss the metaphysical and ethical issues surrounding the possibility of mind uploading, mind extension, and the creation of super-intelligent AI.
Here are some of the questions that we will raise and try to answer:
- What distinguishes mental phenomena from nonmental phenomena?
- What is the metaphysical relation between the mind and brain?
- Can intentionality be naturalized?
- What is consciousness? Is consciousness just a complex state of the brain, or is it something more than that (e.g., a nonphysical entity or property)?
- Is it possible to build machines (e.g. digital computers) that have minds like ours?
- Are our smartphones and laptops (literally) extensions of our minds?
- In the future, will it be possible to “upload” our minds to a cloud server? Will the uploads be us, or merely clones of us? Is there a relevant difference between these two options?
- If we build robots that have minds like ours, should they have the same (moral and/or legal) rights as we do?
90 (B. Saad) This is a survey course on philosophical issues raised by minds, machines, and persons. Topics covered will include:
- What are persons? Could future machines be persons? Could you be uploaded to a computer?
- What is consciousness? Can science explain it? Could engineers build conscious systems?
- What is intelligence? How does a system’s intelligence bear on its moral status?
- Why does the universe appear to be designed for intelligent observers? What scientific and philosophical hypotheses, if any, does this support about the nature and origin of the universe?
- Why, given the vastness of the universe, do we appear to be the only intelligent observers?
- Are there in-principle obstacles to creating human-level artificial intelligence (AI)?
- What are the potential risks and benefits of future AI systems? What should we do to ensure that AI plays a positive role in shaping the future?
- Under what conditions should we give robots rights? Is it feasible or desirable to ensure that advanced AI systems are aligned with human values? Morally speaking, should we prefer futures in which humans are replaced by certain sorts of digital minds?
- Credits: 3
- Syllabus Disclaimer: The information on this syllabus is subject to change. For up-to-date course information, please refer to the syllabus on your course site (e.g. Canvas) on the first day of class.