[MINI] Feed Forward Neural Networks
Data Skeptic24 Mar 2017

Feed Forward Neural Networks

In a feed forward neural network, neurons cannot form a cycle. In this episode, we explore how such a network would be able to represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table

Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider the perceptron described below. First we see the visual representation, then the activation function, followed by the formula for calculating the output.

Can this perceptron learn the AND function?

Sure. For example, let w1 = w2 = 1 and bias = -1.5, with a step activation that outputs 1 when the weighted sum is positive.

What about OR?

Yup. For example, let w1 = w2 = 1 and bias = -0.5.

An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important: without it, the AND function could not be represented.
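A minimal sketch of the perceptron above, using the illustrative weights chosen here (one of infinitely many valid solutions) and a step activation:

```python
def step(z):
    # Heaviside step activation: fire only when the weighted sum is positive
    return 1 if z > 0 else 0

def perceptron(x1, x2, w1, w2, bias):
    # output = activation(w1*x1 + w2*x2 + bias)
    return step(w1 * x1 + w2 * x2 + bias)

def AND(x1, x2):
    # Fires only when both inputs are 1 (sum must exceed 1.5)
    return perceptron(x1, x2, 1.0, 1.0, -1.5)

def OR(x1, x2):
    # Fires when at least one input is 1 (sum must exceed 0.5)
    return perceptron(x1, x2, 1.0, 1.0, -0.5)
```

Note that setting the bias to 0 makes AND impossible to represent with these non-negative inputs, which is why the bias term matters.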

How about XOR?

No. It is not possible to represent XOR with a single layer; it requires two layers. The image below shows how it could be done with two layers.

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If only a single input is 1, this node will not activate.
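The two-layer construction can be sketched as follows. The weight values here are illustrative (the original figure's exact weights are not reproduced), but they follow the same idea: a hidden AND-detector node whose activation penalizes the output when both inputs are on.

```python
def step(z):
    # Heaviside step activation
    return 1 if z > 0 else 0

def xor(x1, x2):
    # Hidden node: an AND detector that activates only when both inputs are 1
    h = step(1.0 * x1 + 1.0 * x2 - 1.5)
    # Output node: the hidden node contributes a heavy penalty (-2) that
    # cancels the two positive inputs when both are on
    return step(1.0 * x1 + 1.0 * x2 - 2.0 * h - 0.5)
```

With a single input on, the hidden node stays silent and the output fires; with both on, the penalty pushes the sum back below the threshold.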

The universal approximation theorem tells us that any continuous function can be tightly approximated using a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for any application. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.

Other types of neural networks have less strict structural definitions. The various ways one might relax this constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics

