[MINI] Feed Forward Neural Networks
Data Skeptic, 24 Mar 2017


Feed Forward Neural Networks

In a feed forward neural network, neurons cannot form a cycle. In this episode, we explore how such a network would be able to represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table
Input 1 | Input 2 | Output
   0    |    0    |   0
   0    |    1    |   0
   1    |    0    |   0
   1    |    1    |   1

OR Truth Table
Input 1 | Input 2 | Output
   0    |    0    |   0
   0    |    1    |   1
   1    |    0    |   1
   1    |    1    |   1

XOR Truth Table
Input 1 | Input 2 | Output
   0    |    0    |   0
   0    |    1    |   1
   1    |    0    |   1
   1    |    1    |   0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider a simple perceptron with two inputs x1 and x2, weights w1 and w2, and a bias term b. Using a step activation function, the output is 1 when w1*x1 + w2*x2 + b > 0, and 0 otherwise.

Can this perceptron learn the AND function?

Sure. Let w1 = 1, w2 = 1, and b = -1.5.

What about OR?

Yup. Let w1 = 1, w2 = 1, and b = -0.5.

An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important. Without it, the AND function could not be represented: with positive weights and no bias, any single active input would push the weighted sum above zero.
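The weight choices above can be checked directly. This is a minimal sketch (the function name `perceptron` and the specific weights are illustrative; any weights satisfying the truth tables work):

```python
def perceptron(x1, x2, w1, w2, b):
    """Step-activation perceptron: fires when the weighted sum exceeds 0."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND: with b = -1.5, both inputs must be on to push the sum above 0.
print([perceptron(x1, x2, 1, 1, -1.5) for x1, x2 in inputs])  # [0, 0, 0, 1]

# OR: with b = -0.5, a single active input is enough.
print([perceptron(x1, x2, 1, 1, -0.5) for x1, x2 in inputs])  # [0, 1, 1, 1]
```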

How about XOR?

No. It is not possible to represent XOR with a single layer; it requires a hidden layer. One way to do it with two layers is described below.

In this construction, the weights computed for one of the hidden nodes capture the essence of why this works. That node activates only when receiving two positive inputs, thus contributing a heavy penalty to the sum computed by the output node. If only a single input is 1, the node does not activate.
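One concrete two-layer wiring (the specific weights are one illustrative choice among many) uses an OR-like hidden node, an AND-like hidden node acting as the penalty described above, and an output node that fires only when OR is on but AND is not:

```python
def step(z):
    """Step activation: 1 if the input exceeds 0, else 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or = step(1 * x1 + 1 * x2 - 0.5)    # fires if either input is 1
    h_and = step(1 * x1 + 1 * x2 - 1.5)   # fires only if both inputs are 1
    # The AND node carries a heavy negative weight, vetoing the (1, 1) case.
    return step(1 * h_or - 2 * h_and - 0.5)

print([xor_net(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```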

The universal approximation theorem tells us that any continuous function can be tightly approximated by a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for many applications. However, in practice, other network architectures and deeper stacks of hidden layers are empirically motivated.

Other types of neural networks have less strict structural definitions. The various ways one might relax the no-cycles constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics
