[MINI] Feed Forward Neural Networks
Data Skeptic · 24 March 2017

Feed Forward Neural Networks

In a feed forward neural network, connections between neurons cannot form a cycle. In this episode, we explore how such a network can represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table

Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider the perceptron described below. First we see the visual representation, then the activation function, followed by the formula for calculating the output.
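In standard form, such a perceptron applies a step activation f to a weighted sum of its two inputs plus a bias term b (a common formulation, assumed here):

$$f(z) = \begin{cases} 1 & \text{if } z > 0 \\ 0 & \text{otherwise} \end{cases}$$

$$\hat{y} = f(w_1 x_1 + w_2 x_2 + b)$$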

Can this perceptron learn the AND function?

Sure. For example, let $w_1 = w_2 = 1$ and $b = -1.5$. Then $w_1 x_1 + w_2 x_2 + b$ is positive only when both inputs are 1.

What about OR?

Yup. For example, let $w_1 = w_2 = 1$ and $b = -0.5$. Now the weighted sum is positive whenever at least one input is 1.

An infinite number of possible solutions exists; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important: without it, the AND function could not be represented.
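As a quick sanity check, here is a minimal Python sketch (using the example weights and bias values above) that evaluates both perceptrons against their truth tables:

def perceptron(x1, x2, w1, w2, b):
    # Step-activated perceptron: fires when the weighted sum exceeds zero.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    and_out = perceptron(x1, x2, 1, 1, -1.5)  # fires only on (1, 1)
    or_out = perceptron(x1, x2, 1, 1, -0.5)   # fires when any input is 1
    print(f"{x1} {x2} | AND={and_out} OR={or_out}")

Its output reproduces the AND and OR truth tables above.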

How about XOR?

No. It is not possible to represent XOR with a single layer; it requires two layers. The image below shows how it can be done with two layers.

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates when receiving two positive inputs, thus contributing a heavy penalty to the sum computed at the output node. If only a single input is 1, this node will not activate.
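The exact weights belong to the figure, but one standard assignment matching the description (an assumption, not necessarily the figure's values) uses an OR-like hidden node, a middle AND-like node that fires only on (1, 1), and a large negative output weight on that middle node as the "heavy penalty":

def step(z):
    return 1 if z > 0 else 0

def xor_network(x1, x2):
    h_or = step(x1 + x2 - 0.5)   # fires when at least one input is 1
    h_and = step(x1 + x2 - 1.5)  # middle node: fires only when both inputs are 1
    return step(h_or - 2 * h_and - 0.5)  # -2 is the heavy penalty on (1, 1)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, xor_network(x1, x2))  # prints the XOR truth table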

The universal approximation theorem tells us that any continuous function (on a compact domain) can be approximated arbitrarily well by a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for many applications. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.
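To get a feel for the theorem, the toy sketch below (numpy assumed; the network is constructed by hand rather than trained) builds a single hidden layer of step-activation neurons whose weighted outputs sum to a staircase approximation of sin. The maximum error shrinks as the hidden layer grows:

import numpy as np

def step(z):
    return (z >= 0).astype(float)

def staircase_net(f, n_hidden, lo, hi):
    # Hidden unit i turns on once x passes breaks[i]; its output weight is
    # the jump f takes there, so the summed outputs trace a staircase under f.
    breaks = np.linspace(lo, hi, n_hidden)
    out_w = np.concatenate(([f(breaks[0])], np.diff(f(breaks))))
    return lambda x: step(x[:, None] - breaks[None, :]) @ out_w

net = staircase_net(np.sin, n_hidden=200, lo=0.0, hi=2 * np.pi)
x = np.linspace(0.0, 2 * np.pi, 1000)
print("max error:", np.abs(net(x) - np.sin(x)).max())  # ~0.03; drops as n_hidden grows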

Other types of neural networks have less strict structural definitions. The various ways one might relax the acyclic constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics

Episodes (601)

Practicing and Communicating Data Science with Jeff Stanton

Jeff Stanton joins me in this episode to discuss his book An Introduction to Data Science, and some of the unique challenges and issues faced by someone doing applied data science. A challenge to any ...

24 Oct 2014 · 36 min

[MINI] The T-Test

The t-test is this week's mini-episode topic. The t-test is a statistical testing procedure used to determine if the means of two datasets differ by a statistically significant amount. We discuss how ...

17 Oct 2014 · 17 min

Data Myths with Karl Mamer

This week I'm joined by Karl Mamer to discuss the data behind three well known urban legends. Did a large blackout in New York and surrounding areas result in a baby boom nine months later? Do sublimi...

10 Oct 2014 · 48 min

Contest Announcement

The Data Skeptic Podcast is launching a contest: not one of chance, but one of skill. Listeners are encouraged to put their data science skills to good use, or if all else fails, guess! The contest w...

8 Oct 2014 · 12 min

[MINI] Selection Bias

A discussion about conducting US presidential election polls helps frame a conversation about selection bias.

3 Oct 2014 · 14 min

[MINI] Confidence Intervals

Commute times and BBQ invites help frame a discussion about the statistical concept of confidence intervals.

26 Sep 2014 · 11 min

[MINI] Value of Information

A discussion about getting ready in the morning, negotiating a used car purchase, and selecting the best AirBnB place to stay at help frame a conversation about the decision theoretic principle known ...

19 Sep 2014 · 14 min

Game Science Dice with Louis Zocchi

In this bonus episode, guest Louis Zocchi discusses his background in the gaming industry, specifically, how he became a manufacturer of dice designed to produce statistically uniform outcomes.  Duri...

17 Sep 2014 · 47 min
