[MINI] Feed Forward Neural Networks
Data Skeptic, 24 March 2017


Feed Forward Neural Networks

In a feed forward neural network, connections between neurons cannot form a cycle. In this episode, we explore how such a network can represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table

Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these functions?

Let's consider the perceptron described below. First we see the visual representation, then the activation function, followed by the formula for calculating the output.
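
A minimal sketch of such a perceptron in Python, assuming a step activation over a weighted sum plus a bias term (the function and variable names here are illustrative, not from the episode):

```python
def step(z):
    """Step activation: fire (output 1) only if the weighted sum is positive."""
    return 1 if z > 0 else 0

def perceptron(x1, x2, w1, w2, b):
    """Output of a two-input perceptron: step(w1*x1 + w2*x2 + b)."""
    return step(w1 * x1 + w2 * x2 + b)
```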

Can this perceptron learn the AND function?

Sure. For example, let w1 = w2 = 1 and b = -1.5.

What about OR?

Yup. For example, let w1 = w2 = 1 and b = -0.5.

An infinite number of possible solutions exists; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important: without it, the AND function could not be represented.
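
As a sanity check, the sketch below runs both sets of example weights through every input pair; the weight values are the illustrative ones chosen above, not necessarily the episode's:

```python
def perceptron(x1, x2, w1, w2, b):
    # Step activation over the weighted sum, as in the sketch above.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Check the example weights against the AND and OR truth tables.
for name, (w1, w2, b) in {"AND": (1, 1, -1.5), "OR": (1, 1, -0.5)}.items():
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(f"{name}({x1}, {x2}) = {perceptron(x1, x2, w1, w2, b)}")
```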

How about XOR?

No. It is not possible to represent XOR with a single layer; it requires two. The image below shows how it could be done with two layers.

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates only when receiving two positive inputs, contributing a heavy penalty to the sum taken by the output node. If only a single input is 1, this node will not activate.
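
A minimal sketch of that two-layer construction, with illustrative weights of my own choosing: one hidden node behaves like OR, the other like AND and supplies the heavy penalty described above.

```python
def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two-layer network for XOR with hand-picked illustrative weights."""
    h_or = step(x1 + x2 - 0.5)       # fires if at least one input is 1
    h_and = step(x1 + x2 - 1.5)      # the "penalty" node: fires only on (1, 1)
    return step(h_or - 2 * h_and - 0.5)  # OR minus a heavy AND penalty

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"XOR({x1}, {x2}) = {xor_net(x1, x2)}")
```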

The universal approximation theorem tells us that any continuous function (on a compact domain) can be approximated arbitrarily well by a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should in principle be adequate for many applications. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.
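
As a rough numerical illustration of the theorem (not the episode's example, and not a proof), the sketch below freezes randomly chosen hidden-layer weights and solves only for the output weights to approximate sin(x) with a single hidden layer; all values here are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()

# Single hidden layer of tanh units with fixed random weights.
n_hidden = 50
W = rng.normal(size=(1, n_hidden)) * 3.0
b = rng.normal(size=n_hidden)
H = np.tanh(x @ W + b)                        # hidden activations, shape (200, 50)

# Fit only the output layer by least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

print(np.max(np.abs(H @ beta - y)))           # maximum approximation error
```

Adding more hidden units tightens the fit, echoing the theorem's "finite number of neurons" claim.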

Other types of neural networks have less strict structural definitions. The various ways one might relax the acyclicity constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics

