LM101-072: Welcome to the Big Artificial Intelligence Magic Show! (LM101-001+LM101-002 remix)

By Richard M. Golden | March 30, 2018
[Episode image: Robot holding a magic wand, hat, and rabbit]


Episode Summary:

This podcast is basically a remix of the first and second episodes of Learning Machines 101 and is intended to serve as the new introduction to the Learning Machines 101 podcast series. The search for common organizing principles which could support the foundations of machine learning and artificial intelligence is discussed, and the concept of the Big Artificial Intelligence Magic Show is introduced. At the end of the podcast, the book After Digital: Computation as Done by Brains and Machines by Professor James A. Anderson is briefly reviewed.

Show Notes:

Hello everyone! Welcome to the 72nd podcast in the podcast series Learning Machines 101. In this series of podcasts, my goal is to discuss important concepts of artificial intelligence and machine learning in a hopefully entertaining and educational manner.

This podcast is basically a remix of the first and second episodes of Learning Machines 101 and is intended to serve as the new introduction to the Learning Machines 101 podcast series.

Artificial Intelligence (AI) is a field of scientific inquiry concerned with the problem of building systems that behave in an intelligent manner. Machine Learning is a special subfield of Artificial Intelligence which involves constructing mechanisms to support automatic learning. Machine learning algorithms are now widely used throughout society. Example applications of machine learning algorithms include: weather prediction, handwriting recognition, translation of speech to text, document search and clustering, missing data analysis, stock market predictions, detecting fraudulent transactions, voice recognition, email spam filtering, identifying computer viruses, identifying faces and places, identifying medical disorders, predicting customer purchasing preferences, signal detection, intelligent tutoring system development, and robotic control in manufacturing, vehicles, and bionics. How do these machine learning systems actually work? And what common organizing principles are shared by many of these systems which could possibly form the basis of a scientific theory of artificial intelligence and machine learning?

The world that we live in is very complex. The key idea in scientific inquiry is to attempt to identify abstractions of reality which can be scientifically investigated yet which provide important insights and advance our understanding of the complexities that surround us. In order to advance our knowledge and understanding in the field of artificial intelligence and machine learning, it is important that we understand core principles of artificial intelligence and machine learning through the application of these principles to “toy problems” and key theoretical results. In these simplified settings, we can take apart key building blocks and put them back together with more confidence, which prepares us for larger-scale “real-world” applications of machine learning and artificial intelligence. This is the scientific method in action. Before we could build a spaceship which could land humans on the moon, it was necessary to have a solid understanding of the principles of spacecraft flight, whose foundations involved the application of Newton’s laws of motion to point masses.

This is how science progresses. We begin by modeling simplified abstractions of reality and gradually attempt to build more complex and realistic models of reality. Along the way we do the best we can, through both theoretical and experimental work, to understand how our models work. Although one might have the ultimate goal of building an android such as the android Data in the TV series Star Trek, a scientist interested in solving these problems will begin by studying a simpler problem which incorporates many of the key features of the original problem.

So, with these thoughts in mind, suppose that we were interested in building a complicated artificial intelligence such as an android. We might begin by asking ourselves which features of artificial intelligence are “common” to many different types of intelligent systems. Then, once these features have been identified, we might try to construct a simplified problem which incorporates many of these features and try to design a machine learning algorithm to solve that simplified problem.

The board game checkers is an interesting environment for exploring some of the key concepts and building blocks which are important to building artificially intelligent systems. Checkers is much simpler than the board games of chess and Go, yet more complex than the board game tic-tac-toe. Each player moves a playing piece on the board according to specific rules and tries to “capture” or “block” the opponent’s playing pieces. When one player cannot make a move, that player loses the game and the other player wins.

The checkers game problem is interesting for several reasons. First, the particular configuration of the playing pieces on the checkerboard is analogous to a “situation” encountered by an android such as Data while commanding a starship. Second, selecting a particular checkers piece to move is analogous to “making a decision” in the context of that situation. In the starship command problem, the number of possible decisions at any given instant in time is virtually infinite, but in checkers the average number of possible moves per situation is only about 5. Third, the “goal” of winning the game of checkers is not realized until multiple moves or decisions into the future (analogous to making a sequence of decisions as a starfleet commander). A starship commander might conceivably make thousands of crucial decisions over a several-day time period before the outcome of those decisions is realized. The checkers playing problem is similar to the starship commander problem in the sense that the consequences of a particular decision are not understood until the distant future. But whereas the distant future for the starship commander might be tens of thousands of decisions away, for a checkers-playing android the distant future is probably no more than about 50 or 100 decisions away; that is, the number of “turns” in a checkers game probably won’t exceed 50 or 100. And fourth, the “situation” of winning the game is very well defined, in contrast to the real world where the concept of success is typically more nebulous.

Thus, the checkers game problem (although vastly less complex than the starship commander problem) shares the essential core ingredients associated with the problem of commanding a starship. Specifically, given a situation, make a decision. Furthermore, the consequences of that decision may not be apparent until after many more decisions and an extended period of time. If we could figure out how to build an android or program a computer to play checkers, we might be able to extend these basic principles to approach the more complicated problem of designing an android capable of commanding a starship.
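To make this “given a situation, make a decision” idea concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the three-number stand-in for a checkers board, the feature functions, and the weights are invented for this example rather than taken from any particular checkers program.

```python
# Minimal "situation -> decision" loop: score each candidate move with a
# simple evaluation function and pick the best one. The features and
# weights below are illustrative assumptions, not a real checkers engine.

def features(situation):
    """Map a situation to a feature vector. Here a 'situation' is just
    (my_piece_count, opponent_piece_count, my_kings) -- a toy stand-in
    for a real checkers board."""
    my_pieces, opp_pieces, my_kings = situation
    return [my_pieces - opp_pieces, my_kings, 1.0]  # 1.0 acts as a bias term

def evaluate(situation, weights):
    """Linear evaluation: higher scores mean the situation looks better."""
    return sum(w * f for w, f in zip(weights, features(situation)))

def decide(situation, legal_moves, apply_move, weights):
    """Given a situation, choose the move whose resulting situation
    scores highest under the evaluation function."""
    return max(legal_moves,
               key=lambda m: evaluate(apply_move(situation, m), weights))

# Toy usage: two hypothetical "moves" that transform the situation tuple.
weights = [1.0, 0.5, 0.0]
situation = (12, 12, 0)  # opening position: 12 pieces per side, no kings
moves = {"capture": lambda s: (s[0], s[1] - 1, s[2]),
         "advance": lambda s: (s[0], s[1], s[2])}
best = decide(situation, list(moves), lambda s, m: moves[m](s), weights)
print(best)  # -> "capture": taking a piece improves the material balance
```

Note that this loop only scores the immediate next situation; the hard part of the checkers problem, as discussed above, is that the true value of a decision may not be revealed until 50 or 100 decisions later.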

In fact, an important breakthrough in artificial intelligence was a computer that could play checkers against very good checkers players. The computer program wasn’t able to beat national checkers champions, but it did compete with expert human checkers players. It had the ability to make decisions about checkerboard situations that it had never seen before, and it also had the ability to learn from experience. And finally, the underlying principles that were the basis of this intelligent checkers playing program involved not only logical deduction but the ability to learn from its experiences in an almost human manner. Thus, when the inventor of this program first started up the computer, the inventor played some games against the computer, which learned from these experiences. As the computer played more games, it became smarter and smarter. And finally, one of the mechanisms underlying this computer program was similar to a biological mechanism that exists in human brains.

So when do you think this important breakthrough in artificial intelligence was accomplished? The development of a computer program which learned from experience by playing other humans and then became more advanced at playing checkers by actually playing the game against itself? In fact, the computer program reached the level of an expert checkers player! Interestingly enough, this breakthrough in artificial intelligence occurred in 1959! The first digital computer, ENIAC, was introduced to the world in 1946!
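The 1959 breakthrough described here is generally identified as Arthur Samuel’s checkers program, which improved a numerical evaluation function as it played. The sketch below shows the general flavor of learning from experience using a temporal-difference-style weight update; the learning rate, feature vectors, and training trace are simplified assumptions for illustration, not Samuel’s actual procedure.

```python
# Sketch of learning an evaluation function from experience: nudge the
# weights so the score assigned to the current situation moves toward the
# score observed several moves later in the game. The update rule and the
# hypothetical self-play trace below are illustrative assumptions.

def td_update(weights, feats_now, value_later, learning_rate=0.01):
    """One temporal-difference-style step: shrink the gap between the
    current prediction and the value realized later in the game."""
    prediction = sum(w * f for w, f in zip(weights, feats_now))
    error = value_later - prediction
    return [w + learning_rate * error * f for w, f in zip(weights, feats_now)]

# Hypothetical experience: pairs of (feature vector of a situation, score
# of the situation actually reached several moves later).
trace = [([1.0, 0.0, 1.0], 2.0),
         ([2.0, 1.0, 1.0], 3.0),
         ([0.0, 0.0, 1.0], 0.5)]

weights = [0.0, 0.0, 0.0]
for _ in range(200):            # replaying games = accumulating experience
    for feats, later in trace:
        weights = td_update(weights, feats, later)
print([round(w, 2) for w in weights])  # weights now predict later outcomes
```

The appeal of this kind of mechanism is exactly what the episode describes: the program gets better simply by playing more games, whether against a human or against itself.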

So given this breakthrough, everybody was really excited and was anxious to see the next scientific advance which would come around the corner. Skeptics, however, made statements such as: “Well… you built an artificially intelligent machine which could learn by experience to play checkers, but checkers is a toy problem. Real artificial intelligence won’t be achieved until you get an artificially intelligent machine to play chess!” Well, eventually, by the 1970s and certainly by the 1990s, chess-playing machine learning algorithms had been developed. At this point in time, solving the problem of chess was now considered a “toy problem,” but the skeptics noted that a machine that could play the game Go would truly exhibit artificial intelligence. Within the past decade, machine learning algorithms have now been developed which play the game Go. Should we now consider Go a toy problem and state that a truly artificially intelligent system should be able to control a driverless car?

Have you ever gone to a magic store where they sell magic tricks? Typically the salespeople are magicians who demo the magic but will only explain how to do a trick after you purchase it. One example piece of magic is the rising card trick. The magician asks you to pick a card, and then it is placed back into the deck. The deck is shuffled. The magician waves his hand over the deck, and a card magically rises: it is the card chosen by the participant! The magic can be done within a few feet of the spectator in any lighting conditions, and there are no threads or strings that can be seen. This seems like real magic. You are skeptical and ask to examine the deck. It is an ordinary deck of cards.

You ask the salesperson if it is a hard piece of magic to learn. They say it is easy to learn but it costs $50. You don’t want to spend the money, but you imagine amazing your friends and family with some real magic, not some cheap trick that comes for free in a cereal box. You pay the $50, which is nonrefundable since the salesperson explains the cost includes the secret. The salesperson then gives you an ordinary deck of cards, instructions, and a tiny piece of scotch tape. The trick works by having a little piece of scotch tape on your thumb which you press against the spectator’s card. You can make the card rise by moving your thumb, which is attached to the card. You feel a little disappointed… maybe even cheated, because you have paid $50 for a piece of scotch tape and instructions. Also, the magic trick just seems like a trick now. It doesn’t seem like real magic.

Artificial intelligence, and perhaps natural biological intelligence, is like a magic show. We are amazed and astounded by the intellectual feats of humans, animals, and machines, yet each time we learn and advance our scientific understanding of the true underlying mechanisms of these intellectual feats, we may feel that we are not implementing true artificial intelligence but rather solving an understood engineering problem. A magic trick whose secret is not known is perceived as truly magical. A smart phone or smart robot or smart checkers playing program which learns from experience and can solve problems that it has never seen before is perceived as artificially intelligent. However, when the secret to the magic trick is revealed… we suddenly change our mind and say that’s not real magic. Similarly, when the methodology used by an artificially intelligent checkers program is revealed, we have a tendency to say that’s not real artificial intelligence. Real artificial intelligence would be more like Data on Star Trek.

This digression is important because this is what we are going to do in this podcast series. My role will be that of a magician who is revealing the secrets of how to do magic tricks. In many cases, these “secrets” will seem rather mundane but they form the foundation of a large class of important machine learning algorithms in the field of artificial intelligence.

As each podcast in the series Learning Machines 101 reveals the next secret, it is important that you appreciate how simple concepts like a piece of tape or a simple learning mechanism can generate truly amazing and astounding phenomena. Instead of being disappointed that you paid $50 for a piece of scotch tape, you should be impressed and amazed by what can be accomplished with a tiny piece of scotch tape!

So with these thoughts in mind…I would like to welcome you to the podcast series Learning Machines 101…or equivalently…. Welcome to the Big Artificial Intelligence Magic Show!

This concludes the remix of the original Episode 1 and Episode 2 in the Learning Machines 101 series.

Before ending this podcast, however, I would like to share with you an overview of a recently published book by my doctoral thesis advisor James A. Anderson titled “After Digital: Computation as Done by Brains and Machines”.

The book “After Digital: Computation as Done by Brains and Machines” provides a nice collection of historical, biological, and evolutionary arguments about the differences between biological neural architectures and computer architectures. The book begins with a discussion comparing the differences between analog and digital computation, then provides an introduction to neuroscience, eventually leading to an assessment of the current state of machine learning and what we might expect to see in the near and distant future.

The essential argument of the book is best summarized by the old joke where someone is looking for keys under a street light and another person asks, “Where did you lose your keys?” And the person looking under the street light says: “I lost my keys in the dark over there, but the light is much better over here!” In other words, Anderson is trying to explain that if we are looking for “keys” (i.e., the essential “key” features of biological computation), then we need to remain focused on biology and try to avoid distracting solutions which might be computationally easy or sociologically popular to implement but essentially implement strategies which don’t appear consistent with the biology. His method of argument is an informal but carefully crafted historical/evolutionary argument which begins in the past and moves to the future.

Indeed, Anderson’s writing style corresponds roughly to a fascinating dinner conversation where multiple related topics are discussed. A seemingly innocent conversation thread might initially present itself as a series of interesting anecdotes, but upon reflection one realizes that a profound point has been made. In fact, Anderson plays this game at multiple levels by slipping in subtle and profoundly important arguments about the nature of biological and digital minds and computation. Thus, the book should be of interest to both novices and experts. Anderson’s writing style reminds me of kids’ cartoons where the main text is designed for kids but a more sophisticated subtext is designed to provide humor for the adults. Similarly, Anderson writes at multiple levels, making the text suitable for multidisciplinary audiences who have varying levels of expertise and background in biological neuroscience and artificial intelligence.

As previously mentioned, the style of the book is very informal. Although successive chapters build upon discussions from previous chapters, the chapters are relatively self-contained essays and informal discussions. Fairly deep topics are covered in an informal and engaging way. So basically you can read the chapters in order or at random. You can read them in one sitting or multiple sittings.

This text is recommended for everyone. The general public will find this book to be a casual introduction to how computers work, how the brain works, and how biologically inspired machine learning algorithms work. Machine learning algorithm researchers will benefit from a unique historical, biological, and evolutionary perspective with insights into design choices for future machine learning architectures.

As previously mentioned, the author of After Digital: Computation as Done by Brains and Machines is Professor James A. Anderson, a computational modeler and cognitive neuroscientist at Brown University. Professor Anderson was one of the early pioneers in the field of artificial neural networks. In the 1970s, his research provided critical seeds which encouraged the resurgence of neural network research in the mid-1980s. Since then he has continued his work in artificial neural network modeling and, in the words of cognitive neuroscientist and computational modeler Professor James McClelland of Stanford University, “[Professor Anderson has] experienced the rise and fall and rise again of neural networks.” Professor Anderson is also the author of the book Talking Nets, in which he interviews many of the pioneers in the field of neural networks. I strongly recommend that you take a look at Talking Nets as well to obtain a good historical perspective on the origins of the field of artificial neural networks from the perspective of the original pioneers in the field.

Thank you again for listening to this episode of Learning Machines 101! I would like to remind you also that if you are a member of the Learning Machines 101 community, please update your user profile and let me know what topics you would like me to cover in this podcast.

You can update your user profile when you receive the email newsletter by simply clicking on the “Let us know what you want to hear” link!

If you are not a member of the Learning Machines 101 community, you can join the community by visiting our website at: www.learningmachines101.com and you will have the opportunity to update your user profile at that time. You can also post requests for specific topics or comments about the show in the Statistical Machine Learning Forum on LinkedIn.

From time to time, I will review the profiles of members of the Learning Machines 101 community and comments posted in the Statistical Machine Learning Forum on LinkedIn and do my best to talk about topics of interest to the members of this group!

And don’t forget to follow us on Twitter. The Twitter handle for Learning Machines 101 is “lm101talk”!

Also, please visit us on iTunes and leave a review. You can do this by going to the website: www.learningmachines101.com and then clicking on the iTunes icon. This will be very helpful to this podcast! Thank you so much. Your feedback and encouragement are greatly valued!

Keywords:  Machine Learning, Artificial Intelligence, Computational Neuroscience, Neural Networks

Further Reading:

  1. Marvin Minsky (1986). The Society of Mind. Simon and Schuster.
  2. James A. Anderson (2017). After Digital: Computation as Done by Brains and Machines. Oxford University Press.
  3. James A. Anderson and Edward Rosenfeld (1998). Talking Nets: An Oral History of Neural Networks. MIT Press.

Copyright Notice:
Copyright © 2014-2018 by Richard M. Golden. All rights reserved.

 
