[MINI] Feed Forward Neural Networks
Data Skeptic, 24 March 2017


Feed Forward Neural Networks

In a feed forward neural network, connections between neurons cannot form a cycle. In this episode, we explore how such a network can represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table

Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider a simple perceptron: it multiplies each input by a weight, sums the results along with a bias term, and passes that sum through a step activation function to produce the output.
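The perceptron just described can be sketched in a few lines of Python (a minimal illustration; the original post presents this as a diagram and formula, which are not reproduced here):

```python
def step(z):
    """Step activation: output 1 when the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def perceptron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through the step activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step(z)
```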

Can this perceptron learn the AND function?

Sure. For example, let w1 = 1, w2 = 1, and bias b = -1.5. Only when both inputs are 1 does the weighted sum clear the threshold.

What about OR?

Yup. For example, let w1 = 1, w2 = 1, and b = -0.5. Any single active input is now enough to produce an output of 1.

An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important: without it, the AND function could not be represented.
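The AND and OR weights above can be checked directly against the truth tables. The sketch below uses the example values from this post (one of infinitely many valid choices):

```python
def step(z):
    return 1 if z >= 0 else 0

def perceptron(inputs, weights, bias):
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

# AND: weights [1, 1], bias -1.5 -- only (1, 1) clears the threshold.
and_out = [perceptron([x1, x2], [1, 1], -1.5)
           for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(and_out)  # [0, 0, 0, 1]

# OR: weights [1, 1], bias -0.5 -- any single active input is enough.
or_out = [perceptron([x1, x2], [1, 1], -0.5)
          for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(or_out)  # [0, 1, 1, 1]
```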

How about XOR?

No. It is not possible to represent XOR with a single layer; it requires two layers, with a hidden layer of nodes sitting between the inputs and the output node.

In that two-layer solution, the weights computed for the middle hidden node capture the essence of why this works. This node activates only when receiving two positive inputs, contributing a heavy penalty to the sum at the output node. If only a single input is 1, this node will not activate.
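Since the original two-layer diagram is not reproduced here, the sketch below uses one common choice of weights (my values, not necessarily the figure's): an OR-like hidden node, an AND-like hidden node that supplies the heavy penalty, and an output node that combines them:

```python
def step(z):
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    # Hidden node 1 acts like OR: fires if either input is on.
    h1 = step(1 * x1 + 1 * x2 - 0.5)
    # Hidden node 2 acts like AND: fires only when BOTH inputs are on.
    h2 = step(1 * x1 + 1 * x2 - 1.5)
    # Output: OR minus a heavy penalty (-2) when both inputs were on.
    return step(1 * h1 - 2 * h2 - 0.5)

xor_out = [xor_net(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(xor_out)  # [0, 1, 1, 0]
```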

The universal approximation theorem tells us that any continuous function on a bounded domain can be approximated arbitrarily well by a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for any application. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.

Other types of neural networks have less strict structural definitions. The various ways one might relax the no-cycle constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics


Episodes (601)

Data Science at ZestFinance with Marick Sinay

Marick Sinay from ZestFinance is our guest this week. This episode explores how data science techniques are applied in the financial world, specifically in assessing creditworthiness.

12 Sep 2014, 31min

[MINI] Decision Tree Learning

Linhda and Kyle talk about Decision Tree Learning in this miniepisode. Decision Tree Learning is the algorithmic process of trying to generate an optimal decision tree to properly classify or forecas...

5 Sep 2014, 13min

Jackson Pollock Authentication Analysis with Kate Jones-Smith

Our guest this week is Hamilton physics professor Kate Jones-Smith who joins us to discuss the evidence for the claim that drip paintings of Jackson Pollock contain fractal patterns. This hypothesis o...

29 Aug 2014, 49min

[MINI] Noise!!

Our topic for this week is "noise" as in signal vs. noise. This is not a signal processing discussion, but rather a brief introduction to how the word noise is used to describe how much information ...

22 Aug 2014, 16min

Guerrilla Skepticism on Wikipedia with Susan Gerbic

Our guest this week is Susan Gerbic. Susan is a skeptical activist involved in many activities; the one we focus on most in this episode is Guerrilla Skepticism on Wikipedia, an organization working t...

15 Aug 2014, 1h 9min

[MINI] Ant Colony Optimization

In this week's mini episode, Linhda and Kyle discuss Ant Colony Optimization - a numerical / stochastic optimization technique which models its search after the process ants employ in using random wal...

8 Aug 2014, 15min

Data in Healthcare IT with Shahid Shah

Our guest this week is Shahid Shah. Shahid is CEO at Netspective, and writes three blogs: Health Care Guy, Shahid Shah, and HitSphere - the Healthcare IT Supersite. During the program, Kyle recommend...

1 Aug 2014, 57min

[MINI] Cross Validation

This miniepisode discusses the technique called Cross Validation - a process by which one randomly divides up a dataset into numerous small partitions. Next, (typically) one is held out, and the rest ...

25 Jul 2014, 0s
