[MINI] Feed Forward Neural Networks
Data Skeptic · 24 Mar 2017

Feed Forward Neural Networks

In a feed forward neural network, neurons cannot form a cycle. In this episode, we explore how such a network would be able to represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table
Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table
Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table
Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider the perceptron described below. First we see the visual representation, then the activation function, followed by the formula for calculating the output.

Can this perceptron learn the AND function?

Sure. Let w1 = w2 = 1 and b = -1.5. Then the weighted sum exceeds zero only when both inputs are 1.

What about OR?

Yup. Let w1 = w2 = 1 and b = -0.5. Then a single active input is enough to push the weighted sum past zero.

An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important. Without it, the AND function could not be represented.
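The weight choices above can be verified in a few lines of Python. A step activation with a threshold at zero is assumed here, consistent with the bias values used above:

```python
def perceptron(x1, x2, w1, w2, b):
    """Step-activated perceptron: outputs 1 when the weighted sum exceeds 0."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND: with w1 = w2 = 1, only the (1, 1) input overcomes the bias of -1.5
and_out = [perceptron(x1, x2, 1, 1, -1.5) for x1, x2 in inputs]

# OR: a single active input is enough to overcome the bias of -0.5
or_out = [perceptron(x1, x2, 1, 1, -0.5) for x1, x2 in inputs]

print("AND:", and_out)  # [0, 0, 0, 1], matching the AND truth table
print("OR: ", or_out)   # [0, 1, 1, 1], matching the OR truth table
```

Any weights that place a separating line between the 0-outputs and 1-outputs would work equally well.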

How about XOR?

No. It is not possible to represent XOR with a single layer, because XOR is not linearly separable: no single line can divide the inputs that map to 1 from those that map to 0. It requires two layers. The image below shows how it could be done with two layers.

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If only a single input is 1, this node will not activate.
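One possible weight assignment realizing this scheme (the specific values are illustrative, again assuming a step activation) looks like the following. The hidden node is an AND detector, and its large negative weight into the output node vetoes the (1, 1) case:

```python
def step(z):
    """Step activation: fires when the input exceeds 0."""
    return 1 if z > 0 else 0

def xor_network(x1, x2):
    # Hidden node: activates only when BOTH inputs are 1 (an AND detector)
    h = step(x1 + x2 - 1.5)
    # Output node: behaves like OR, but the hidden node's -2 weight
    # contributes a heavy penalty that suppresses the (1, 1) case
    return step(x1 + x2 - 2 * h - 0.5)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, xor_network(x1, x2))  # reproduces the XOR truth table
```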

The universal approximation theorem tells us that any continuous function (on a bounded domain) can be approximated arbitrarily well by a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for many applications. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.
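To make the theorem concrete, here is a tiny hand-built sketch (not part of the original episode): a single hidden layer of four ReLU units approximating f(x) = x² on [0, 1] as a piecewise-linear curve. Adding more hidden units would shrink the error further, which is the intuition behind the theorem:

```python
def relu(z):
    """ReLU activation, a common choice of hidden-layer nonlinearity."""
    return max(0.0, z)

def approx_square(x):
    """One hidden layer of 4 ReLU units interpolating x^2 at x = 0, 0.25, 0.5, 0.75, 1.
    Each unit's output weight is the change in slope at its knot."""
    return (0.25 * relu(x - 0.00)
            + 0.5 * relu(x - 0.25)
            + 0.5 * relu(x - 0.50)
            + 0.5 * relu(x - 0.75))

# Worst-case error on this grid stays below 0.02
errors = [abs(approx_square(i / 100) - (i / 100) ** 2) for i in range(101)]
print(max(errors))
```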

Other types of neural networks have less strict structural definitions. The various ways one might relax the no-cycles constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics
