
Amazing Neural Network Video Demonstration


I recently posted an article featuring a very deep neural network in action (250 layers), see here. Each frame in the video represented one layer, with the signal propagating from one layer to the next. In the last layer, the whole space was classified, in the sense that any new observation was immediately assigned to a particular group. The groups were massively overlapping. The connection structure (the number of connections per neuron) was sparse, allowing for a large number of layers. The purpose was supervised classification.
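To make the mechanics concrete, below is a minimal sketch of a signal propagating through a deep, sparsely connected network. This is not the actual code behind that video (linked further down); the layer width, the number of connections per neuron, and the activation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

n_layers = 250    # depth comparable to the network in that video
width = 64        # assumed layer width, for illustration only
n_conn = 4        # sparse: each neuron listens to only 4 neurons of the previous layer

def sparse_layer():
    """Weight matrix with only n_conn non-zero entries per row."""
    W = np.zeros((width, width))
    for i in range(width):
        cols = rng.choice(width, size=n_conn, replace=False)
        W[i, cols] = rng.normal(size=n_conn)
    return W

layers = [sparse_layer() for _ in range(n_layers)]

# Propagate one observation layer by layer -- one video frame per layer.
x = rng.normal(size=width)
for W in layers:
    x = np.tanh(W @ x)   # bounded activation keeps the deep signal stable

print(x[:5])
```

Because each neuron only connects to a handful of neurons in the previous layer, the cost per layer stays low, which is what makes such a large number of layers practical.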

The example discussed here, though also involving a data animation and a supervised classification problem, illustrates a different aspect of neural networks. This time, there are 5 layers. The purpose, given the picture of a shape, is to classify it (based on a training set) into one of four categories: circle, square, triangle, or unknown. Note that my classification problem also involved four classes.
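For readers who want to try something similar, here is a minimal sketch of such a classifier. It is not Ryan's actual code (linked further down); the image size, layer widths, activations, and dropout rate are assumptions chosen for illustration.

```python
from tensorflow import keras

# A small multi-layer perceptron mapping a flattened shape drawing to one of
# four classes. All sizes and activations below are assumed, not taken from
# Ryan's code.
model = keras.Sequential([
    keras.layers.Input(shape=(28 * 28,)),            # flattened 28x28 drawing (assumed size)
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.2),                       # dropout, as in Ryan's description
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(4, activation="softmax"),     # circle, square, triangle, unknown
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # cross-entropy loss, as in the video
              metrics=["accuracy"])
model.summary()
```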

Interestingly, in my case, the data consisted of standard numerical, tabular observations (synthetic data) turned into images for easy GPU processing. Here, the roles are reversed: the (non-synthetic) data consists of actual images, yet the video does not show those images. Instead, it features the neural network architecture in action, showing how the signal propagates across the layers until a specific observed shape is assigned to one of the four categories. This offers a very different perspective on how a neural network classifier works: a back-end view of the operations, while my video offers a front-end view.

Another difference is the use of non-linear functions in my neural network, while the example featured here relies on standard (linear) weights between connected neurons. Also, Ryan’s neural network is not sparse, quite the contrary, which explains why it needs far fewer layers.

About the Animation

The description below is from Ryan Chesler, the author of the video.

This animation exhibits a multi-layer perceptron with dropout, trained on a dataset of hand-drawn squares, circles, and triangles. It was made with the Python Matplotlib animation function. The code will plot a neural network of any dimensions when given the input sums and the weight matrices between each layer, and it colors the nodes based on their saturation. It takes one training example every 25 epochs and shows its forward pass as it computes, as well as the cross-entropy loss and accuracy.
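To give a flavor of how such an animation can be produced, here is a minimal sketch using Matplotlib’s FuncAnimation. It assumes a small random network rather than Ryan’s trained one (his code is linked below), and it reveals the forward pass one layer per frame, coloring each node by its activation.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(1)
layer_sizes = [16, 12, 8, 6, 4]          # 5 layers, ending in 4 output nodes (assumed sizes)
weights = [rng.normal(scale=0.5, size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

# One forward pass on a random input, keeping every layer's activation vector.
x = rng.normal(size=layer_sizes[0])
activations = [x]
for W in weights:
    x = np.tanh(W @ x)
    activations.append(x)

fig, ax = plt.subplots()

def draw(frame):
    """Reveal one more layer per frame, coloring nodes by their activation."""
    ax.clear()
    ax.axis("off")
    ax.set_xlim(-0.5, len(layer_sizes) - 0.5)
    ax.set_ylim(-0.5, max(layer_sizes) - 0.5)
    for i in range(frame + 1):
        a = activations[i]
        ys = np.arange(len(a)) + (max(layer_sizes) - len(a)) / 2
        ax.scatter(np.full(len(a), i), ys, c=a, cmap="coolwarm",
                   vmin=-1, vmax=1, s=200)
    ax.set_title(f"Forward pass: layer {frame + 1} of {len(layer_sizes)}")

anim = FuncAnimation(fig, draw, frames=len(layer_sizes), interval=800)
plt.show()
```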

The source code is on GitHub, here. As for my video, the source code is on my GitHub repository, here. My code is described in detail in my new book, available here. You can also watch my video here. Below is Ryan’s video.

Ryan’s videos are on YouTube, here. Mine are also on YouTube, here. To make sure you don’t miss future articles and to receive monthly updates, sign up for our newsletter.


