LM101-055: How to Learn Statistical Regularities using MAP and Maximum Likelihood Estimation (Rerun)

Episode Summary: In this rerun of Episode 10, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. Show Notes: Hello everyone! Welcome to the tenth podcast in… Read More »

LM101-010: How to Learn Statistical Regularities (MAP and maximum likelihood estimation)

Episode Summary: In this podcast episode, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. Show Notes: Hello everyone! Welcome to the tenth podcast in the podcast series Learning Machines 101. In this series of podcasts my goal… Read More »
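
To make the ML-versus-MAP distinction concrete, here is a minimal sketch (my own illustration, not code from the episode) using a Bernoulli coin-flip model with a conjugate Beta prior; the numbers are purely illustrative:

```python
import numpy as np

# Toy data: 7 heads out of 10 flips of a possibly biased coin.
heads, n = 7, 10

# Maximum likelihood: pick the parameter that maximizes the data likelihood.
theta_ml = heads / n  # closed form for a Bernoulli model

# MAP: fold in prior knowledge, here a Beta(a, b) prior saying the coin is
# probably roughly fair. The posterior mode has a closed form for this
# conjugate prior-likelihood pair.
a, b = 5.0, 5.0
theta_map = (heads + a - 1) / (n + a + b - 2)

print(f"ML estimate:  {theta_ml:.3f}")   # 0.700 -- driven by the data alone
print(f"MAP estimate: {theta_map:.3f}")  # 0.611 -- pulled toward the prior
```

With a flat prior (a = b = 1) the two estimates coincide; the stronger the prior, the more the MAP estimate resists the data, which is the "prior knowledge guiding learning" idea the episode develops.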

LM101-077: How to Choose the Best Model using BIC

Episode Summary: In this episode, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability model, while AIC is used to estimate out-of-sample prediction… Read More »
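
As a rough illustration of the idea (a sketch, not material from the episode), the BIC of a fitted model with k free parameters and maximized log-likelihood ln L on n observations is -2 ln L + k ln n, and the model with the smaller value is preferred:

```python
import numpy as np

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: lower values indicate a model whose
    (approximate) marginal likelihood of the training data is higher."""
    return -2.0 * log_likelihood + k * np.log(n)

# Toy comparison: two Gaussian models of the same n = 200 observations.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=200)

# Model 1: mean fixed at 0, only the variance is estimated (k = 1).
sigma2_1 = np.mean(x**2)
ll_1 = np.sum(-0.5 * np.log(2 * np.pi * sigma2_1) - x**2 / (2 * sigma2_1))

# Model 2: both mean and variance estimated (k = 2).
mu, sigma2_2 = np.mean(x), np.var(x)
ll_2 = np.sum(-0.5 * np.log(2 * np.pi * sigma2_2) - (x - mu)**2 / (2 * sigma2_2))

print(bic(ll_1, k=1, n=len(x)))  # worse (larger): the fixed mean is wrong
print(bic(ll_2, k=2, n=len(x)))  # better despite the extra parameter
```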

LM101-076: How To Choose the Best Model using AIC or GAIC

Episode Summary: In this episode, we explain the proper semantic interpretation of the Akaike Information Criterion (AIC) and the Generalized Akaike Information Criterion (GAIC) for the purpose of picking the best model for a given set of training data. The precise semantic interpretation of these model selection criteria is given, and explicit assumptions are provided for the AIC and… Read More »
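
For comparison with the BIC sketch above, here is the corresponding AIC computation, plus one common generalized form in which the parameter count is replaced by a covariance-based penalty; treat the GAIC form shown as an assumption about the episode's definition, not a quotation of it:

```python
import numpy as np

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln L. Unlike BIC's k*ln(n)
    penalty, the 2k penalty targets out-of-sample predictive accuracy."""
    return 2.0 * k - 2.0 * log_likelihood

def gaic(log_likelihood, A, B):
    """One common generalized-AIC form (assumed here): replace the parameter
    count k with trace(B @ inv(A)), where A is the negative mean Hessian of
    the per-observation log-likelihood and B the mean outer product of its
    gradients. Under correct specification A = B, so GAIC reduces to AIC."""
    return 2.0 * np.trace(B @ np.linalg.inv(A)) - 2.0 * log_likelihood
```

With the two Gaussian fits from the BIC sketch, comparing `aic(ll_1, 1)` against `aic(ll_2, 2)` picks the same winner but with a penalty that does not grow with the sample size.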

LM101-071: How to Model Common Sense Knowledge using First-Order Logic and Markov Logic Nets

Episode Summary: In this podcast, we provide some insights into the complexity of common sense. First, we discuss the importance of building common sense into learning machines. Second, we discuss how first-order logic can be used to represent common sense knowledge. Third, we describe… Read More »
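
As an illustrative sketch (my own toy example, not one from the episode), a ground Markov logic network scores each possible world by exponentiating the total weight of the weighted first-order rules that the world satisfies:

```python
import itertools, math

# A tiny ground Markov logic network over three boolean atoms:
#   Smokes(A), Smokes(B), Friends(A,B)
# with one weighted rule: Friends(A,B) ^ Smokes(A) => Smokes(B), weight 1.5.
atoms = ["Smokes(A)", "Smokes(B)", "Friends(A,B)"]
w = 1.5

def rule_satisfied(world):
    sa, sb, fab = world["Smokes(A)"], world["Smokes(B)"], world["Friends(A,B)"]
    return (not (fab and sa)) or sb  # material implication

worlds = [dict(zip(atoms, vals)) for vals in itertools.product([False, True], repeat=3)]

# Each world gets unnormalized score exp(sum of weights of satisfied ground
# formulas); normalizing over all worlds yields a probability distribution,
# so a violated "common sense" rule makes a world less likely, not impossible.
scores = [math.exp(w * rule_satisfied(wld)) for wld in worlds]
Z = sum(scores)
for wld, s in zip(worlds, scores):
    print({k: int(v) for k, v in wld.items()}, f"P = {s / Z:.3f}")
```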

LM101-069: What Happened at the 2017 Neural Information Processing Systems Conference?

Episode Summary: This 69th episode of Learning Machines 101 provides a short overview of the 2017 Neural Information Processing Systems conference with a focus on the development of methods for teaching learning machines rather than simply training them on examples. In addition, a book review of… Read More »

LM101-058: How to Identify Hallucinating Learning Machines using Specification Analysis

Episode Summary: In this 58th episode of Learning Machines 101, I’ll be discussing an important new scientific breakthrough, published just last week in the journal Econometrics in a special issue on model misspecification, titled “Generalized Information Matrix Tests for Detecting Model Misspecification”. The article… Read More »
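
The underlying idea can be sketched as follows (a toy illustration under my own assumptions, not the test statistic from the article): for a correctly specified model, the negative mean Hessian of the log-likelihood and the mean outer product of its gradients should agree at the ML estimate, so a large discrepancy between the two signals misspecification:

```python
import numpy as np

# Simulate a correctly specified logistic regression, fit it by Newton's
# method, then compare the two information-matrix estimates.
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=(n, 2))
theta_true = np.array([1.0, -2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-x @ theta_true)))

theta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-x @ theta))
    grad = x.T @ (y - p)                            # log-likelihood gradient
    H = -(x * (p * (1 - p))[:, None]).T @ x         # log-likelihood Hessian
    theta -= np.linalg.solve(H, grad)               # Newton ascent step

p = 1.0 / (1.0 + np.exp(-x @ theta))
g = x * (y - p)[:, None]                            # per-observation gradients
A = (x * (p * (1 - p))[:, None]).T @ x / n          # negative mean Hessian
B = g.T @ g / n                                     # mean gradient outer product
print(np.round(A, 3))
print(np.round(B, 3))  # close to A here, since the model is correctly specified
```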

LM101-056: How to Build Generative Latent Probabilistic Topic Models for Search Engine and Recommender System Applications

Episode Summary: In this episode we discuss Latent Semantic Indexing type machine learning algorithms which have a probabilistic interpretation. We explain why such a probabilistic interpretation is important and discuss how such algorithms can be used in the design of document retrieval… Read More »
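
For context, here is a minimal non-probabilistic Latent Semantic Indexing sketch (my own toy data) using a truncated SVD of a tiny term-document matrix; probabilistic topic models such as pLSI or LDA replace the SVD with a generative latent-variable model, which is the kind of interpretation the episode argues for:

```python
import numpy as np

terms = ["learning", "machine", "probability", "guitar", "chord"]
#                 d0 d1 d2 d3
counts = np.array([[2, 3, 0, 0],   # learning
                   [1, 2, 0, 0],   # machine
                   [1, 1, 0, 1],   # probability
                   [0, 0, 3, 2],   # guitar
                   [0, 0, 2, 2]])  # chord

# Truncated SVD: keep two latent "topics" and embed each document.
U, s, Vt = np.linalg.svd(counts.astype(float), full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(doc_vecs[0], doc_vecs[1]))  # high: both machine-learning docs
print(cosine(doc_vecs[0], doc_vecs[2]))  # low: music vs. machine learning
```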

LM101-047: How to Build a Support Vector Machine to Classify Patterns (Rerun)

Episode Summary: In this RERUN of the 32nd episode of Learning Machines 101, we introduce the concept of a Support Vector Machine. We explain how to estimate the parameters of such machines to classify a pattern vector as a member of one of two categories… Read More »

LM101-046: How to Optimize Student Learning using Recurrent Neural Networks (Educational Technology)

Episode Summary: In this episode, we briefly review Item Response Theory and Bayesian Network Theory methods for the assessment and optimization of student learning and then describe a poster presented on the first day of the Neural Information Processing Systems conference in December 2015 in Montreal… Read More »

LM101-037: How to Build a Smart Computerized Adaptive Testing Machine using Item Response Theory

Episode Summary: In this episode, we discuss the problem of how to build a smart computerized adaptive testing machine using Item Response Theory (IRT). Suppose that you are teaching a student a particular target set of knowledge. Examples of such situations obviously occur in… Read More »
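
As a hedged sketch of the core mechanics (illustrative, not the episode's exact algorithm), the one-parameter Rasch model gives the probability of a correct response, and a simple adaptive tester administers whichever item carries the most Fisher information at the current ability estimate:

```python
import numpy as np

def p_correct(theta, b):
    """Rasch (1PL) item response model: probability that a student with
    ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of an item at ability theta; adaptive testing
    typically administers the item most informative about the current
    ability estimate, i.e. with difficulty near theta."""
    p = p_correct(theta, b)
    return p * (1 - p)

item_difficulties = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
theta_hat = 0.4  # current ability estimate from earlier responses

info = item_information(theta_hat, item_difficulties)
next_item = int(np.argmax(info))
print(f"administer item {next_item} (difficulty {item_difficulties[next_item]})")
```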

LM101-036: How to Predict the Future from the Distant Past using Recurrent Neural Networks

Episode Summary: In this episode, we discuss the problem of predicting the future from not only recent events but also from the distant past using Recurrent Neural Networks (RNNs). An example RNN is described which learns to label images with simple sentences. A learning… Read More »
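
A minimal illustration of the long-range-dependency problem (my own toy task, written with PyTorch, not drawn from the episode): an LSTM must report the first bit of a sequence after seeing nineteen later distractor bits, so it has to carry information from the distant past to the present:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, hidden = 20, 16

# Gated recurrent architectures such as the LSTM handle this kind of
# long-range dependency far better than simple recurrent networks.
lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
readout = nn.Linear(hidden, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(readout.parameters()), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    x = torch.randint(0, 2, (64, seq_len, 1)).float()
    target = x[:, 0, 0]                       # label = the first bit, 19 steps back
    out, _ = lstm(x)
    logit = readout(out[:, -1]).squeeze(-1)   # predict from the final hidden state
    loss = loss_fn(logit, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")  # typically falls toward 0 as it learns
```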

LM101-032: How To Build a Support Vector Machine to Classify Patterns

Episode Summary: In this 32nd episode of Learning Machines 101, we introduce the concept of a Support Vector Machine. We explain how to estimate the parameters of such machines to classify a pattern vector as a member of one of two categories as well as identify special… Read More »
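
For a quick hands-on feel (a sketch using scikit-learn, not the estimation procedure derived in the episode), a linear support vector machine fit to separable two-dimensional data exposes its maximum-margin hyperplane and its support vectors:

```python
import numpy as np
from sklearn.svm import SVC

# Two classes in the plane, separated with a comfortable margin.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[-2, -2], size=(50, 2)),
               rng.normal(loc=[+2, +2], size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A linear SVM finds the maximum-margin separating hyperplane; the "support
# vectors" are the training points that lie on or inside the margin.
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print("weights:", clf.coef_, "bias:", clf.intercept_)
print("support vectors per class:", clf.n_support_)
print("predicted class of (1, 1):", clf.predict([[1.0, 1.0]]))
```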

LM101-027: How to Learn About Rare and Unseen Events (Smoothing Probabilistic Laws) [RERUN]

Episode Summary: In this podcast episode, we discuss the design of statistical learning machines which can make inferences about rare and unseen events using prior knowledge. Show Notes: Hello everyone! Welcome to a RERUN of the 11th podcast in the podcast series Learning Machines 101. In this… Read More »

LM101-026: How to Learn Statistical Regularities (Rerun)

Episode Summary: In this rerun of Episode 10, we discuss fundamental principles of learning in statistical environments including the design of learning machines that can use prior knowledge to facilitate and guide the learning of statistical regularities. Show Notes: Hello everyone! Welcome to the tenth podcast in the… Read More »

LM101-015: How to Build a Machine that Can Learn Anything (The Perceptron)

Episode Summary: In this episode we describe how to build a machine that can learn any given pattern of inputs and generate any desired pattern of outputs when it is possible to do so! Show Notes: Hello everyone! Welcome to the fifteenth podcast in the podcast series Learning Machines 101. In this series of podcasts my goal is… Read More »
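
A compact sketch of the classic perceptron learning rule (my own illustration; the episode covers the broader story of what such a machine can and cannot learn):

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Rosenblatt's perceptron rule: whenever the current weights misclassify
    an example, nudge them toward (or away from) that example. Guaranteed to
    converge whenever the two classes are linearly separable."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # yi is +1 or -1
            if yi * (w @ xi + b) <= 0:    # misclassified (or on the boundary)
                w += yi * xi
                b += yi
    return w, b

# Learn the logical OR function (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, 1])
w, b = train_perceptron(X, y)
print([int(np.sign(w @ xi + b)) for xi in X])  # [-1, 1, 1, 1]
```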

LM101-011: How to Learn About Rare and Unseen Events (Smoothing Probabilistic Laws)

Episode Summary: Today we address a strange yet fundamentally important question. How do you predict the probability of something you have never seen? Or, in other words, how can we accurately estimate the probability of rare events? Show Notes: Hello everyone! Welcome to the eleventh podcast in the podcast series Learning Machines 101. In this series of podcasts… Read More »
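
The simplest answer the episode's question points toward is additive smoothing. Here is a minimal add-one (Laplace) smoothing sketch, with a toy vocabulary of my own choosing:

```python
from collections import Counter

# Add-one (Laplace) smoothing: assign nonzero probability to events never
# observed in training by pretending every event in the vocabulary was seen
# once more than it actually was.
observed = ["cat", "dog", "dog", "bird", "cat", "dog"]
vocabulary = ["cat", "dog", "bird", "fish"]   # "fish" was never observed

counts = Counter(observed)
n, v = len(observed), len(vocabulary)

for word in vocabulary:
    p_ml = counts[word] / n                    # ML estimate: exactly 0 for "fish"
    p_laplace = (counts[word] + 1) / (n + v)   # smoothed: never exactly 0
    print(f"{word:5s}  ML={p_ml:.3f}  Laplace={p_laplace:.3f}")
```

The uniform pseudo-count acts as a crude form of prior knowledge; more refined smoothing schemes, discussed in the episode, choose that prior more carefully.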

LM101-042: What happened at the Monte Carlo Inference Methods Tutorial at the 2015 Neural Information Processing Systems Conference?

Episode Summary: This is the second of a short subsequence of podcasts providing a summary of events associated with Dr. Golden’s recent visit to the 2015 Neural Information Processing Systems Conference. This is one of the top conferences in the… Read More »
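
As background for the tutorial's subject matter, here is a minimal Metropolis-Hastings sketch (a standard textbook-style example of Markov chain Monte Carlo, not material from the tutorial itself): it draws samples from an unnormalized density using only pointwise evaluations of that density.

```python
import numpy as np

def unnormalized_target(x):
    # A two-component Gaussian mixture; the normalizing constant is
    # deliberately unknown, which is the situation MCMC is built for.
    return np.exp(-0.5 * (x - 2) ** 2) + 0.5 * np.exp(-0.5 * (x + 2) ** 2)

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + rng.normal(scale=1.0)   # symmetric random-walk proposal
    accept_prob = min(1.0, unnormalized_target(proposal) / unnormalized_target(x))
    if rng.random() < accept_prob:
        x = proposal
    samples.append(x)

samples = np.array(samples[5_000:])        # discard burn-in
print(f"sample mean: {samples.mean():.2f}")  # near the mixture mean of ~2/3
```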