[MINI] Feed Forward Neural Networks
Data Skeptic · 24 March 2017


Feed Forward Neural Networks

In a feed forward neural network, neurons cannot form a cycle. In this episode, we explore how such a network would be able to represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table
Input 1   Input 2   Output
0         0         0
0         1         0
1         0         0
1         1         1

OR Truth Table
Input 1   Input 2   Output
0         0         0
0         1         1
1         0         1
1         1         1

XOR Truth Table
Input 1   Input 2   Output
0         0         0
0         1         1
1         0         1
1         1         0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider the perceptron described below. First we see the visual representation, then the activation function, followed by the formula for calculating the output.
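As a sketch of the idea (the function and variable names here are illustrative, not from the original post), a two-input perceptron with a step activation can be written as:

```python
def perceptron(x1, x2, w1, w2, b):
    """Two-input perceptron with a step activation.

    Computes the weighted sum w1*x1 + w2*x2 + b, then applies a
    step function: output 1 if the sum is positive, else 0.
    """
    total = w1 * x1 + w2 * x2 + b
    return 1 if total > 0 else 0
```

The weights and bias determine which input combinations make the weighted sum cross zero, and therefore which logical function the perceptron computes.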

Can this perceptron learn the AND function?

Sure. Let w1 = 1, w2 = 1, and b = -1.5.

What about OR?

Yup. Let w1 = 1, w2 = 1, and b = -0.5.

An infinite number of possible solutions exists; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important: without it, the AND function could not be represented.
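One quick way to convince yourself that weights of this kind work is to check the perceptron against every row of the truth tables. The sketch below uses example weight and bias values (w1 = w2 = 1, with b = -1.5 for AND and b = -0.5 for OR); these are one of the infinitely many valid choices, not necessarily the exact values from the post.

```python
def perceptron(x1, x2, w1, w2, b):
    """Two-input perceptron: step function on the weighted sum plus bias."""
    total = w1 * x1 + w2 * x2 + b
    return 1 if total > 0 else 0

# AND: the sum exceeds the threshold only when both inputs are 1.
for x1 in (0, 1):
    for x2 in (0, 1):
        assert perceptron(x1, x2, 1, 1, -1.5) == (x1 and x2)

# OR: the sum exceeds the threshold when at least one input is 1.
for x1 in (0, 1):
    for x2 in (0, 1):
        assert perceptron(x1, x2, 1, 1, -0.5) == (x1 or x2)
```

Note that with b = 0, no choice of w1 and w2 can make the AND table work: the (0, 0) input always sums to exactly 0, so shifting the decision boundary requires a bias term.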

How about XOR?

No. It is not possible to represent XOR with a single layer. It requires two layers. The image below shows how it could be done with two layers.

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates only when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If only a single input is 1, this node will not activate.
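A minimal sketch of this two-layer construction follows, with assumed weights (not necessarily those pictured in the post): one hidden node computes OR, the other computes AND, and the output node subtracts a heavy penalty from the AND node so that it fires only when exactly one input is 1.

```python
def step(z):
    """Step activation: 1 if the input is positive, else 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """XOR via a two-layer feed forward network (example weights)."""
    h1 = step(1 * x1 + 1 * x2 - 0.5)   # hidden node: OR
    h2 = step(1 * x1 + 1 * x2 - 1.5)   # hidden node: AND (the penalty node)
    # Output: fires for OR, unless the AND node's -2 penalty cancels it.
    return step(1 * h1 - 2 * h2 - 0.5)
```

When both inputs are 1, the penalty node contributes -2, overwhelming the +1 from the OR node, so the output stays 0.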

The universal approximation theorem tells us that any continuous function can be tightly approximated using a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for any application. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.

Other types of neural networks have less strict structural definitions. The various ways one might relax this constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics
