Category Archives: Probabilistic Inference

LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes

Episode Summary: This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in the space-time continuum that characterizes our physical world. Such a set is called an “environmental event”. The machine learning algorithm uses information about the frequency of environmental events to support learning. If we want to…
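
A rough Python sketch of the measure-theoretic idea behind the episode: in a continuum, probability mass attaches to events (sets of outcomes) rather than to individual outcomes. The standard normal density below is an illustrative choice of mine, not anything taken from the episode.

```python
import math

# For a continuous random variable, any single outcome has probability
# zero; only sets of outcomes ("environmental events", e.g. intervals)
# carry probability mass, computed here from the standard normal CDF.
def normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def interval_probability(a: float, b: float) -> float:
    return normal_cdf(b) - normal_cdf(a)

print(interval_probability(-1.0, 1.0))  # ~0.6827: an event with mass
print(interval_probability(0.5, 0.5))   # 0.0: a single point has none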

LM101-071: How to Model Common Sense Knowledge using First-Order Logic and Markov Logic Nets

Episode Summary: In this podcast, we provide some insights into the complexity of common sense. First, we discuss the importance of building common sense into learning machines. Second, we discuss how first-order logic can be used to represent common sense knowledge. Third, we describe…
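
To make the step from first-order logic to probabilities concrete, here is a toy Markov logic net sketch of my own; the two-person domain, the rule Smokes(x) -> Cancer(x), and its weight are invented, not taken from the episode.

```python
import itertools
import math

# A toy Markov logic net with one weighted rule over a two-person domain:
#   w = 1.5 : Smokes(x) -> Cancer(x)
# A "world" assigns True/False to every ground atom; the unnormalized
# score of a world is exp(w * #satisfied groundings), and probabilities
# come from normalizing over all worlds.
PEOPLE = ["anna", "bob"]
W_RULE = 1.5  # illustrative weight, not from the episode

def satisfied_groundings(world):
    # Count persons x for which Smokes(x) -> Cancer(x) holds.
    return sum(
        1 for x in PEOPLE
        if (not world[("Smokes", x)]) or world[("Cancer", x)]
    )

atoms = [(p, x) for p in ("Smokes", "Cancer") for x in PEOPLE]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
scores = [math.exp(W_RULE * satisfied_groundings(w)) for w in worlds]
z = sum(scores)  # partition function

# Worlds violating the rule are exponentially less probable, not impossible:
for world, s in zip(worlds, scores):
    if satisfied_groundings(world) < len(PEOPLE):
        print(f"a violating world has probability {s / z:.4f}")
        break
```

The design point is that a weighted rule makes rule-violating worlds improbable rather than impossible, which is what brittle all-or-nothing logical representations of common sense lack.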

LM101-056: How to Build Generative Latent Probabilistic Topic Models for Search Engine and Recommender System Applications

Episode Summary: In this episode we discuss Latent Semantic Indexing-type machine learning algorithms that have a probabilistic interpretation. We explain why such a probabilistic interpretation is important and discuss how such algorithms can be used in the design of document retrieval…
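
As a minimal illustration of the generative, probabilistic view, here is a mixture-of-unigrams sketch of my own (one of the simplest probabilistic cousins of Latent Semantic Indexing); the vocabulary and all probabilities are invented.

```python
import numpy as np

# A minimal generative latent topic sketch (mixture-of-unigrams flavor):
# each document is generated by first drawing a latent topic z, then
# drawing every word from P(word | z). All numbers are made up.
vocab = ["game", "score", "vote", "election", "team", "ballot"]
p_topic = np.array([0.5, 0.5])             # P(z): sports, politics
p_word_given_topic = np.array([
    [0.30, 0.30, 0.02, 0.03, 0.30, 0.05],  # sports
    [0.03, 0.02, 0.30, 0.30, 0.05, 0.30],  # politics
])

def doc_likelihood(words):
    """P(doc) = sum_z P(z) * prod_w P(w | z), marginalizing the topic."""
    idx = [vocab.index(w) for w in words]
    per_topic = p_word_given_topic[:, idx].prod(axis=1)
    return float(p_topic @ per_topic)

def topic_posterior(words):
    """P(z | doc) by Bayes rule."""
    idx = [vocab.index(w) for w in words]
    joint = p_topic * p_word_given_topic[:, idx].prod(axis=1)
    return joint / joint.sum()

print(doc_likelihood(["game", "score", "team"]))
print(topic_posterior(["game", "score", "team"]))  # mostly "sports"
print(topic_posterior(["vote", "ballot"]))         # mostly "politics"
```

The posterior P(topic | document) is the kind of quantity a document retrieval or recommender system can match against the topic posterior of a query.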

LM101-055: How to Learn Statistical Regularities using MAP and Maximum Likelihood Estimation (Rerun)

Episode Summary: In this rerun of Episode 10, we discuss fundamental principles of learning in statistical environments, including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. Show Notes: Hello everyone! Welcome to the tenth podcast in…
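
For the gist in code, here is a minimal sketch of maximum likelihood versus MAP estimation of a coin's heads probability, with a Beta prior standing in for prior knowledge; the counts and prior parameters are illustrative assumptions, not material from the episode.

```python
# ML vs. MAP estimation of a coin's heads probability. The Beta(a, b)
# prior encodes prior knowledge; with few observations it dominates,
# and with many observations the two estimates converge.
def ml_estimate(heads: int, flips: int) -> float:
    return heads / flips

def map_estimate(heads: int, flips: int, a: float, b: float) -> float:
    # Posterior mode of Beta(a + heads, b + flips - heads).
    return (heads + a - 1) / (flips + a + b - 2)

# 3 heads in 3 flips: ML concludes the coin NEVER lands tails.
print(ml_estimate(3, 3))                      # 1.0
# A Beta(5, 5) prior ("the coin is probably fair") tempers that:
print(map_estimate(3, 3, a=5.0, b=5.0))       # ~0.636
# After 1000 flips, the prior barely matters:
print(map_estimate(510, 1000, a=5.0, b=5.0))  # ~0.510
```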

LM101-043: How to Learn a Monte Carlo Markov Chain to Solve Constraint Satisfaction Problems (Rerun)

Welcome to the 43rd Episode of Learning Machines 101! We are currently presenting a subsequence of episodes covering the events of the recent Neural Information Processing Systems Conference. However, this week we will digress with a rerun of Episode 22, which…

LM101-042: What happened at the Monte Carlo Inference Methods Tutorial at the 2015 Neural Information Processing Systems Conference?

Episode Summary: This is the second of a short subsequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the…

LM101-027: How to Learn About Rare and Unseen Events (Smoothing Probabilistic Laws)[RERUN]

Episode Summary: In this podcast episode, we discuss the design of statistical learning machines that can make inferences about rare and unseen events using prior knowledge. Show Notes: Hello everyone! Welcome to a RERUN of the 11th podcast in the podcast series Learning Machines 101. In this…

LM101-026: How to Learn Statistical Regularities (Rerun)

Episode Summary: In this rerun of Episode 10, we discuss fundamental principles of learning in statistical environments, including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. Show Notes: Hello everyone! Welcome to the tenth podcast in the…

LM101-021: How to Solve Large Complex Constraint Satisfaction Problems (Monte Carlo Markov Chain)

Episode Summary: In this episode we discuss how to solve constraint satisfaction inference problems where knowledge is represented as a large unordered collection of complicated probabilistic constraints among a collection of variables. The goal of the inference process is to infer the most probable values of the unobservable variables given the observable variables. Show Notes: Hello everyone! Welcome…
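
A minimal sketch of the Monte Carlo Markov chain approach the series describes, here a Gibbs sampler over an invented toy problem (the variables, soft constraints, and weights are my assumptions, not from the episode): the observable variable is clamped, and the sampler visits high-probability configurations most often.

```python
import math
import random

# Binary variables x1..x4 with soft probabilistic constraints expressed
# as an energy; lower energy = more constraints satisfied. We clamp the
# observable variable x[0], Gibbs-sample the rest, and read off the most
# frequently visited configuration as the (approximate) most probable one.
def energy(x):
    e = 0.0
    e += 2.0 * (x[0] != x[1])   # soft constraint: x1 == x2
    e += 2.0 * (x[1] != x[2])   # soft constraint: x2 == x3
    e += 1.0 * (x[2] == x[3])   # soft constraint: x3 != x4
    return e

random.seed(0)
x = [1, 0, 0, 0]                # x[0] = 1 is observed (clamped)
counts = {}
for sweep in range(5000):
    for i in (1, 2, 3):         # resample each unobserved variable
        x[i] = 0
        p0 = math.exp(-energy(x))
        x[i] = 1
        p1 = math.exp(-energy(x))
        x[i] = 1 if random.random() < p1 / (p0 + p1) else 0
    key = tuple(x)
    counts[key] = counts.get(key, 0) + 1

# Most visited state; (1, 1, 1, 0) satisfies all three constraints.
print(max(counts, key=counts.get))
```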

LM101-011: How to Learn About Rare and Unseen Events (Smoothing Probabilistic Laws)

Episode Summary: Today we address a strange yet fundamentally important question. How do you predict the probability of something you have never seen? Or, in other words, how can we accurately estimate the probability of rare events? Show Notes: Hello everyone! Welcome to the eleventh podcast in the podcast series Learning Machines 101. In this series of podcasts…
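
One classic concrete answer is additive (Laplace) smoothing, sketched below; the corpus, vocabulary, and pseudocount are invented for the example, and the pseudocounts play the role of the prior knowledge the episode emphasizes.

```python
from collections import Counter

# Additive (Laplace) smoothing gives unseen events nonzero probability
# by pretending every outcome was observed "alpha" extra times.
corpus = ["rain", "rain", "sun", "rain", "cloud"]
vocab = {"rain", "sun", "cloud", "snow"}  # "snow" was never observed
alpha = 1.0                               # pseudocount per outcome

counts = Counter(corpus)
total = len(corpus)

def smoothed_prob(word: str) -> float:
    return (counts[word] + alpha) / (total + alpha * len(vocab))

print(smoothed_prob("rain"))  # 4/9 ~ 0.444 instead of ML's 3/5
print(smoothed_prob("snow"))  # 1/9 ~ 0.111 instead of ML's 0.0
```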

LM101-010: How to Learn Statistical Regularities (MAP and maximum likelihood estimation)

Episode Summary: In this podcast episode, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. Show Notes: Hello everyone! Welcome to the tenth podcast in the podcast series Learning Machines 101. In this series of podcasts my goal…

LM101-008: How to Represent Beliefs using Probability Theory

Episode Summary: This episode focuses on how an intelligent system can represent beliefs about its environment using fuzzy measure theory. Probability theory is introduced as a special case of fuzzy measure theory that is consistent with the classical laws of logical inference. Show Notes: Hello everyone! Welcome to the eighth podcast in the podcast series Learning Machines 101. In…
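
To make the consistency-with-logic point concrete, here is a small sketch of my own: beliefs represented as a probability measure over possible worlds, under which a logically entailed proposition can never be believed less strongly than its premise. The propositions and numbers are invented for illustration.

```python
import itertools

# Beliefs as a probability measure over possible worlds: the belief in a
# proposition is the total mass of the worlds in which it holds. If A
# entails B, every world satisfying A satisfies B, so belief(B) >= belief(A).
ATOMS = ["wet", "raining"]
worlds = list(itertools.product([False, True], repeat=len(ATOMS)))

# Assign each world a probability (nonnegative weights summing to 1).
weights = {w: 0.25 for w in worlds}
weights[(True, True)] = 0.40   # "wet and raining" believed most likely
weights[(False, True)] = 0.10  # "raining but not wet" believed unlikely

def belief(prop):
    # Sum the probabilities of the worlds (wet, raining) where prop is true.
    return sum(p for w, p in weights.items() if prop(*w))

is_raining = lambda wet, rain: rain
wet_or_raining = lambda wet, rain: wet or rain  # entailed by is_raining

print(belief(is_raining))      # 0.50
print(belief(wet_or_raining))  # 0.75 >= belief(is_raining), as logic demands
```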