[MINI] Feed Forward Neural Networks
Data Skeptic, 24 March 2017


Feed Forward Neural Networks

In a feed forward neural network, neurons cannot form a cycle. In this episode, we explore how such a network would be able to represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table
Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table
Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table
Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider the perceptron described below. First we see the visual representation, then the activation function, followed by the formula for calculating the output: output = f(w1*x1 + w2*x2 + b), where f is a step function that outputs 1 when its argument is positive and 0 otherwise.

Can this perceptron learn the AND function?

Sure. For example, let w1 = 1, w2 = 1, and b = -1.5 (assuming a step activation that outputs 1 when the weighted sum is positive).

What about OR?

Yup. For example, let w1 = 1, w2 = 1, and b = -0.5.

An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important. Without it, the AND function could not be represented.
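As a quick sketch of the idea (the weight values here are illustrative choices, not taken from the episode), the perceptron above fits in a few lines of Python:

```python
def perceptron(x1, x2, w1, w2, b):
    """Step-activation perceptron: outputs 1 when the weighted sum exceeds 0."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND: both inputs must be active to clear the -1.5 threshold.
print([perceptron(x1, x2, 1, 1, -1.5) for x1, x2 in inputs])

# OR: a single active input is enough to clear the -0.5 threshold.
print([perceptron(x1, x2, 1, 1, -0.5) for x1, x2 in inputs])
```

Running both loops reproduces the AND and OR truth table columns above.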

How about XOR?

No. It is not possible to represent XOR with a single layer; it requires two layers. The image below shows how it could be done with two layers.

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If only a single input is 1, this node will not activate.
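A minimal sketch of such a two-layer network in Python (the weights are illustrative, chosen so the hidden "penalty" node fires only when both inputs are 1, as described above):

```python
def step(z):
    """Step activation: 1 when the input is positive, otherwise 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two-layer network: an OR node, an AND 'penalty' node, and an output node."""
    h_or = step(1 * x1 + 1 * x2 - 0.5)    # fires if either input is 1
    h_and = step(1 * x1 + 1 * x2 - 1.5)   # fires only when both inputs are 1
    # The AND node carries a heavy negative weight, vetoing the OR node
    # when both inputs are active.
    return step(1 * h_or - 2 * h_and - 0.5)

print([xor_net(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```

The output matches the XOR truth table: only the mixed inputs produce 1.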

The universal approximation theorem tells us that any continuous function can be tightly approximated using a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for any application. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.

Other types of neural networks have less strict structural definitions. The various ways one might relax this constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics
