Category Archives: Rule-based Inference

LM101-080: Ch2: How to Represent Knowledge using Set Theory

Episode Summary: This podcast covers the material in Chapter 2 of my new book “Statistical Machine Learning: A unified framework,” which has an expected publication date of May 2020. Chapter 2, titled “Set Theory for Concept Modeling,” discusses how to represent knowledge using set theory notation. Show… Read More »
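To give a flavor of the idea, here is a minimal sketch (the feature names and helper are illustrative, not taken from the book) of representing concepts as sets of features, so that relations between concepts become ordinary set operations:

```python
# Concepts modeled as sets of features (names are purely illustrative).
bird = frozenset({"has_wings", "lays_eggs", "has_feathers"})
penguin = bird | {"swims"}            # all bird features plus more
canary = bird | {"sings", "flies"}

def is_specialization(a, b):
    """Concept a specializes concept b when b's features are a subset of a's."""
    return b <= a

print(is_specialization(penguin, bird))  # True
print(penguin & canary == bird)          # the shared features are exactly the bird features
```

Here subset, union, and intersection play the roles of specialization, feature combination, and shared structure, respectively.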

LM101-074: How to Represent Knowledge using Logical Rules (remix)

Episode Summary: In this episode we will learn how to use “rules” to represent knowledge. We discuss how this works in practice and we explain how these ideas are implemented in a special architecture called the production system. The challenges of representing knowledge using rules are also discussed. Specifically,… Read More »

LM101-071: How to Model Common Sense Knowledge using First-Order Logic and Markov Logic Nets

Episode Summary: In this podcast, we provide some insights into the complexity of common sense. First, we discuss the importance of building common sense into learning machines. Second, we discuss how first-order logic can be used to represent common sense knowledge. Third, we describe… Read More »
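The core Markov-logic idea can be sketched very compactly: attach a weight to each logical rule, score each possible world by the total weight of the rules it satisfies, and make probabilities proportional to the exponential of the score. The rules and weights below are illustrative, not from the episode:

```python
import math

# Weighted rules over a tiny world with two propositions.
weighted_rules = [
    (2.0, lambda w: (not w["smokes"]) or w["cancer"]),  # Smokes => Cancer (soft)
    (1.0, lambda w: not w["smokes"]),                   # soft bias against smoking
]

def score(world):
    """Sum of weights of the rules this world satisfies."""
    return sum(wt for wt, rule in weighted_rules if rule(world))

worlds = [{"smokes": s, "cancer": c} for s in (False, True) for c in (False, True)]
z = sum(math.exp(score(w)) for w in worlds)             # partition function
for w in worlds:
    print(w, round(math.exp(score(w)) / z, 3))
```

Unlike hard first-order logic, a world that violates a rule is not impossible, only less probable, which is how the framework tolerates the exceptions that pervade common sense knowledge.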

LM101-026: How to Learn Statistical Regularities (Rerun)

How to Learn Statistical Regularities using MAP and ML Estimation Episode Summary: In this rerun of Episode 10, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. Show Notes: Hello everyone! Welcome to the tenth podcast in the… Read More »

LM101-010: How to Learn Statistical Regularities (MAP and maximum likelihood estimation)

Episode Summary: In this podcast episode, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. Show Notes: Hello everyone! Welcome to the tenth podcast in the podcast series Learning Machines 101. In this series of podcasts my goal… Read More »
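The difference between the two estimators can be shown with a coin-flipping sketch (the numbers and the choice of a Beta prior are illustrative): maximum likelihood uses only the observed frequency, while MAP lets a prior encode the learning machine's prior knowledge.

```python
heads, flips = 3, 4

# Maximum likelihood: the observed frequency of heads.
ml_estimate = heads / flips

# MAP with a Beta(a, b) prior: the prior acts like (a - 1) extra heads
# and (b - 1) extra tails, pulling the estimate toward the prior mean.
a, b = 5, 5   # prior belief that the coin is roughly fair
map_estimate = (heads + a - 1) / (flips + a + b - 2)

print(ml_estimate)    # 0.75
print(map_estimate)   # 7/12, roughly 0.583
```

With only four flips the prior dominates and keeps the MAP estimate near 0.5; as the number of flips grows, the two estimates converge, which is exactly the sense in which prior knowledge guides learning when data is scarce.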

LM101-008: How to Represent Beliefs using Probability Theory

Episode Summary: This episode focuses upon how an intelligent system can represent beliefs about its environment using fuzzy measure theory. Probability theory is introduced as a special case of fuzzy measure theory which is consistent with classical laws of logical inference. Show Notes: Hello everyone! Welcome to the eighth podcast in the podcast series Learning Machines 101. In… Read More »

LM101-007: How to Reason About Uncertain Events using Fuzzy Set Theory and Fuzzy Measure Theory

Episode Summary: In real life, there is no certainty. There are always exceptions. In this episode, two methods are discussed for making inferences in uncertain environments. In fuzzy set theory, a smart machine has certain beliefs about imprecisely defined concepts. In fuzzy measure theory, a smart machine has beliefs about precisely defined concepts but some beliefs are stronger… Read More »
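The fuzzy-set half of this idea can be sketched with a single imprecise concept (the thresholds and the piecewise-linear membership function are illustrative): membership is a degree in [0, 1] rather than a yes/no answer.

```python
def tall(height_cm):
    """Degree of membership in the imprecise concept 'tall':
    0 below 160 cm, 1 above 190 cm, linear in between."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

def short(height_cm):
    return 1.0 - tall(height_cm)      # fuzzy complement

h = 175
print(tall(h))                         # 0.5
print(min(tall(h), short(h)))          # 'tall AND short' via fuzzy min: 0.5
```

Note the contrast with classical logic: "tall AND short" has nonzero degree at 175 cm, because borderline cases partially belong to both concepts.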

LM101-005: How to Decide if a Machine is Artificially Intelligent (The Turing Test)

Episode Summary: In this episode, we discuss the Turing Test for Artificial Intelligence, which is designed to determine if the behavior of a computer is indistinguishable from the behavior of a thinking human being. The chatbot A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) is interviewed and basic concepts of AIML (Artificial Intelligence Markup Language) are introduced. Show Notes: Hello everyone!… Read More »

LM101-004: Can computers think? A mathematician’s response

Episode Summary: In this episode, we explore the question of what computers can do as well as what they cannot do, using the Turing Machine argument. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines. Show Notes: Hello everyone! Welcome to the… Read More »

LM101-003: How to Represent Knowledge using Logical Rules

Episode Summary: In this episode we will learn how to use “rules” to represent knowledge. We discuss how this works in practice and we explain how these ideas are implemented in a special architecture called the production system. The challenges of representing knowledge using rules are also discussed. Specifically, these challenges include: issues of feature representation, having an… Read More »

LM101-002: How to Build a Machine that Learns to Play Checkers

Episode Summary: In this episode, we explain how to build a machine that learns to play checkers. The solution to this problem involves several key ideas which are fundamental to building systems which are artificially intelligent. Show Notes: Hello everyone! Welcome to the second podcast in the podcast series Learning Machines 101. In this series of podcasts my… Read More »
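One of those key ideas can be sketched as a linear evaluation function whose weights are learned, in the spirit of Samuel's checkers player (the feature names, target value, and update rule here are illustrative): score a board as a weighted sum of features, then nudge the weights toward a better-informed estimate of the same position.

```python
features = ["piece_advantage", "king_advantage", "mobility"]
weights = {f: 0.0 for f in features}

def evaluate(board):
    """Board value as a weighted sum of its feature values."""
    return sum(weights[f] * board[f] for f in features)

def update(board, target, lr=0.1):
    """Move the evaluation toward a target score (e.g., from deeper search)."""
    error = target - evaluate(board)
    for f in features:
        weights[f] += lr * error * board[f]

board = {"piece_advantage": 2, "king_advantage": 1, "mobility": 3}
for _ in range(50):
    update(board, target=1.0)   # pretend deeper search valued this position at 1.0
print(round(evaluate(board), 2))  # converges toward 1.0
```

The design choice worth noticing is that the machine never needs labeled positions: it bootstraps by treating its own later, deeper evaluations as the training signal for its earlier ones.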