[MINI] Feed Forward Neural Networks
Data Skeptic, 24 March 2017


In a feed forward neural network, connections between neurons cannot form a cycle: information flows in one direction, from input to output. In this episode, we explore how such a network can represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case.

Below are the truth tables that describe each of these functions.

AND Truth Table

Input 1  Input 2  Output
0        0        0
0        1        0
1        0        0
1        1        1

OR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1

XOR Truth Table

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions?

Let's consider a simple perceptron with two inputs x1 and x2, weights w1 and w2, and a bias b. Using a unit-step activation function, the output is 1 when w1*x1 + w2*x2 + b > 0, and 0 otherwise.

Can this perceptron learn the AND function?

Sure. For example, let w1 = w2 = 1 and b = -1.5.

What about OR?

Yup. Let w1 = w2 = 1 and b = -0.5.

An infinite number of possible solutions exist; these are just values that hopefully seem intuitive. This is also a good example of why the bias term is important: without it, the AND function could not be represented.
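The weight assignments above can be checked directly against the truth tables. A minimal sketch, assuming a unit-step activation; the specific weights are just one of the infinitely many valid choices:

```python
def step(z):
    # Unit step activation: fire (output 1) only when the weighted sum is positive.
    return 1 if z > 0 else 0

def perceptron(x1, x2, w1, w2, b):
    # Two-input perceptron: step(w1*x1 + w2*x2 + b).
    return step(w1 * x1 + w2 * x2 + b)

def AND(x1, x2):
    # Fires only when both inputs are 1: the sum must exceed 1.5.
    return perceptron(x1, x2, 1, 1, -1.5)

def OR(x1, x2):
    # Fires when at least one input is 1: the sum must exceed 0.5.
    return perceptron(x1, x2, 1, 1, -0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, AND(x1, x2), OR(x1, x2))
```

Running the loop reproduces the AND and OR truth tables row by row.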

How about XOR?

No. It is not possible to represent XOR with a single layer; it requires two. The image below shows one way it could be done with two layers.

In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates only when receiving two positive inputs, thus contributing a heavy penalty to the sum computed by the output node. If only a single input is 1, this node will not activate.
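That mechanism can be written out explicitly. A minimal two-layer sketch with illustrative weights and step activations: the hidden node acts as an AND detector, and the output node computes OR minus a heavy penalty whenever that detector fires:

```python
def step(z):
    # Unit step activation.
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden node: activates only when both inputs are 1 (an AND detector).
    h = step(x1 + x2 - 1.5)
    # Output node: OR of the inputs, with a -2 penalty when the hidden node fires,
    # which drives the sum back below the threshold for the (1, 1) case.
    return step(x1 + x2 - 2 * h - 0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_net(x1, x2))
```

The loop reproduces the XOR truth table: 0 for matching inputs, 1 otherwise.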

The universal approximation theorem tells us that any continuous function (on a bounded domain) can be approximated arbitrarily well by a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for many applications. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated.
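The flavor of this result can be demonstrated numerically. The sketch below is not the theorem's construction, just a hedged illustration: fix a single hidden layer of randomly initialized tanh units, and fit only the linear output weights by least squares to approximate sin(x):

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 50

# Random (fixed) input weights and biases for one hidden layer of tanh units.
W = rng.normal(0.0, 2.0, (1, n_hidden))
b = rng.uniform(-4.0, 4.0, n_hidden)

# Target function: sin(x) sampled on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()

# Hidden-layer activations, then solve for the output weights in closed form.
H = np.tanh(x @ W + b)
coef, *_ = np.linalg.lstsq(H, y, rcond=None)

max_err = np.max(np.abs(H @ coef - y))
print(f"max approximation error: {max_err:.4f}")
```

Even with only the output layer trained, 50 hidden units suffice to approximate the curve closely; adding more units drives the error down further, in the spirit of the theorem.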

Other types of neural networks have less strict structural definitions. The various ways one might relax the no-cycle constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts.

Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics

